ATLAS Level 1 Calorimeter Trigger Software

L1Calo Software Minutes: 30 June 2004

 

Software meeting at Stockholm on 30 June 2004

NB: these minutes are only a brief summary of the main points and do not attempt to record all the detailed exchanges.

Timescales

We started by trying to summarise the timescales and our priorities. There will be an initial foray to CERN from 2-6 August to set up for the testbeam, followed by another trip at the end of August.

The first trip aims to integrate a "slice" of hardware in the testbeam software environment. The steps start with booting our CPUs, installing our software and integrating our configuration database. We can then try running our run controllers, reading the same events triggered by a common L1A, with the same BC and L1ID etc.

The initial visit may also be used to get concentrated help with moving to ROD crate DAQ. Initially we may keep using our existing run controllers. Can we run two controllers in the same crate, eg accessing the BUSY, LTP and maybe TTCvi via ROD crate DAQ, but the RODs in the same crate via our own controller? Would there be any problem with VME access, ie concurrent connections via the driver?

In preparation, we should ask for accounts on the testbeam systems so that we can test our software installation beforehand.

Tools

Norman suggested we list the tools we already have available for diagnosing the system and identify those we still need. Nick remarked that the first experience at the test beam is often that, after starting the run control, nothing happens, and detectors frequently have no tools to understand why.

Discussion under this heading included:

  • Diagnostic printout: we have quite a bit in our run control logs - is it enough?
  • IGUI panels: some statistics, but could do more for module status
  • Hardware display: in IGUI or existing HDMC panels
  • We could do with better tools for timing in readouts.
  • Event monitoring: existing debugging tools are OK for simple checks - but there is a worry that monitoring at other points (ie anywhere apart from the ROS) might block events.
  • We will need at least minimal event monitoring to produce histograms (see below).
  • We should learn to analyse recorded data. For a standalone partition we can write to CDROMs.
  • Analysing events: we should only implement decoding of the 6U formats that will be used in the testbeam.
  • Channel mapping (eg LAr/Tile cells to towers): only use flat files for the moment (a sketch of such a reader follows this list).
  • Ensure we have quick tools to set essential module parameters in the database (or PPM XML files), in particular to set all PPM channels in the XML file to the same parameters. (After the meeting this was checked and it can be done.)
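
For illustration, a minimal C++ sketch of a flat-file channel map reader. The file name and column layout (crate, module, channel, eta, phi, with '#' comment lines) are assumptions made for this sketch, not an agreed format:

    #include <fstream>
    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    struct TowerCoord { double eta; double phi; };

    // Pack (crate, module, channel) into a single map key.
    inline int channelKey(int crate, int module, int channel) {
      return (crate << 16) | (module << 8) | channel;
    }

    std::map<int, TowerCoord> loadChannelMap(const std::string& fileName) {
      std::map<int, TowerCoord> result;
      std::ifstream in(fileName.c_str());
      std::string line;
      while (std::getline(in, line)) {
        if (line.empty() || line[0] == '#') continue;  // skip blanks/comments
        std::istringstream is(line);
        int crate, module, channel;
        TowerCoord tc;
        if (is >> crate >> module >> channel >> tc.eta >> tc.phi)
          result[channelKey(crate, module, channel)] = tc;
      }
      return result;
    }

    int main() {
      // "ppm_channel_map.txt" is a placeholder name for illustration.
      std::map<int, TowerCoord> map = loadChannelMap("ppm_channel_map.txt");
      std::cout << "Loaded " << map.size() << " channel mappings\n";
      return 0;
    }

If the reader sits behind the same lookup interface a future database would use, the flat file could later be swapped out without touching client code.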

Monitoring at the test beam

Adrian presented a talk to open a discussion on monitoring.

Among the comments arising from the talk were:

  • We need to set a timescale and we should focus on the lower levels of Adrian's hierarchy first. We don't need more sophisticated treatment of non-event data yet.
  • We should ensure compatibility with offline simulation in ATHENA and should try to define an abstract interface for byte stream decoding that can fit in with ATHENA or be run separately (a possible shape for such an interface is sketched after this list).
  • However, it's also OK to do something quick for the test beam and use it as a prototype for later.
  • Existing RoI byte stream decoding in ATHENA uses incorrect formats. These should be changed soon!
  • To help define the task we should produce a list of histograms we want to see at the testbeam from each subsystem. Categories we want may be:
    • "Standalone" L1Calo histograms
    • Comparison with calo, eg global sums
    • Detailed check of LAr/Tile cell sums to individual towers
    The last one or two items may only be possible using ATHENA.
  • We should quickly propose an architecture and Adrian should implement it with an example.
  • Given the list of histograms and the examples, someone from each subsystem should provide the monitoring software by a given deadline (to be proposed).
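
To make the architecture discussion a little more concrete, here is a minimal sketch of what such an abstract byte stream decoding interface might look like. All class and method names are hypothetical, not an agreed design:

    #include <cstddef>
    #include <stdint.h>
    #include <vector>

    // Placeholder for whatever decoded object the real event model provides.
    struct TriggerTower {
      int eta;
      int phi;
      std::vector<int> lut;  // LUT values, one per readout slice
    };

    // Abstract decoder: concrete subclasses would implement only the
    // 6U formats actually used in the testbeam.
    class IByteStreamDecoder {
    public:
      virtual ~IByteStreamDecoder() {}
      // Decode one ROD fragment (nWords raw 32-bit words) into towers.
      // Returns false for malformed fragments or unrecognised formats.
      virtual bool decode(const uint32_t* data, std::size_t nWords,
                          std::vector<TriggerTower>& towers) = 0;
    };

The idea is that an ATHENA converter and a standalone testbeam monitoring program would both drive the same decode() call on raw ROD fragments, so only the surrounding glue differs between the two environments.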

Other testbeam related issues

We need better bookkeeping and recording of configuration history. Maybe try OBK, or more frequent commits to CVS. (Discovered later: at the test beam the common configuration databases will be archived to CVS every night).

Are we still aiming for calibration studies with the LAr in October? We should talk to them again during our August visit.

Multistep runs

There was a test at RAL shortly before the Stockholm meeting which was fairly successful. Some database issues still needed to be fixed.

The multistep scheme is OK for soak tests, but what about scans of parameters? These are probably not needed for trigger menus, but will be useful for timing investigations, eg the stability of the Glink to the RODs, and also, once PPMs are available, for timing scans of CPMs and JEMs as LVDS sink modules.

Documentation

Documentation status is a perennial agenda item, but what documentation do we really need? The suggestions raised were:

AOB

Next meeting(s)

To be arranged.


Last updated on 16-Jul-2004 by Murrough Landon