
L1Calo Software Minutes: 26 January 2004

 

L1Calo Software phone conference on 26 January 2004

Present: Adrian, Bruce, Cano, Dimitri, Florian, Gilles, Jürgen, Murrough, Norman, Stefan, Steve, Thomas.

Software progress reports

Gilles has tried running the L1Calo software compiled with gcc 3.2. He observes some timeouts in the run control transitions. It was suggested this might be due to some remaining BinaryTag attributes in the database still being set to the gcc295 variant.

He has made a few updates to cpmServices to reset parity error counts and to cpmTests so that the GlinkKicker can now read more than one CPM.

Florian has made a few changes to the PPM register model. A new homebrew system has been constructed. The CERN VME driver will be checked next week. Meanwhile Florian is working on the database with Cano.

Cano is now using the DHCP server to boot their systems. He has also tried the gcc 3.2 compiled software - and may also have seen problems related to the BinaryTags.

He has also tried multistep runs (to run soak tests reloading different test vectors). However, he still needs to stop the JemKicker program resetting the IS variable error counts at each step.

Jürgen, together with Steve, has made a number of changes to the JEM and CMM simulation. There is now a DSS file generator for the JEM, and the JEM playback generator can now automatically convert physics test vector files in Jürgen's old format.

The generator for CMM (energy) playback data now takes more settings from the database. Steve and Jürgen also provided a test implementation of the as yet undocumented CMM Energy Slink data format.

Jürgen will provide more details in a note to the at1soft list.

Murrough has been making changes to the L1Calo database code to cope with the new database style in the Online software. (The new style is available with online-00-20-00 and will be required under the forthcoming online-00-21-00.) This is nearly done. A few changes will probably be required in our other software, though some of them are only to remove unused references to obsolete classes, which can already be done.

Bruce has speeded up test vector reading in dssServices; further improvement may be possible. He has also set up the TTCvi BGo to stop L1A generation at the end of a run - though this may change anyway with new DSS firmware.

He has also successfully tried running the gcc 3.2 compiled software and reports that, after changing all the BinaryTags, the transitions appeared about 15% faster with gcc 3.2 (though without including a CPM). If this can be reproduced at other sites, we can all move to gcc 3.2.

On the ROS front, Bruce has got the HOLA/FILAR combination working, but needed to move to online version 20 and dataflow version 06. The L1Calo software needs a few local modifications to work with these versions, but this has been done. However it appears that the ROS still doesn't provide events to the online monitoring. Bruce is confirming this with the ROS developers.

Bruce also reported that Weiming is now working on the "neutral" format ROD firmware. Steve pointed out that extra support in the database is needed to switch between standard and neutral formats in the simulation.

Migration to new Online/Dataflow versions

We are presently using online-00-19-01 and DF-00-05-00. In the autumn new versions of both (online-00-20-00 and DF-00-06-00) were released for developers. Further versions, intended as testbeam (pre)releases, are expected soon. The new online release (online-00-21-00) will require use of the new generated database libraries and will also have a new state model for the run control.

We have to migrate to the new versions; the question is when. We are already using online/DF 20/06 at RAL, but for other sites it may be best to wait at least until online release 21 is out (very soon). We can then check whether we can run that with DF 06 or whether we have to wait for DF 07 (probably a month or so away). Apart from the ROS, which we only have at RAL, we only use the VME drivers from the dataflow release, so this may be possible.

If that's the case, we can freeze our present software, make a final release of it, and then change to the new database style and run control state model, which will require another update of the module services interface. These changes are provisionally scheduled for the week of 16-20 February, after the next set of JEP slice tests.

Discussion on monitoring

We had a discussion on various areas of monitoring, mainly concentrating on hardware monitoring and online event monitoring. These notes try to summarise the consensus.

Hardware monitoring

We have some limited hardware monitoring already. The run controller requests module services to fill IS module status variables, one per module, which are then published in IS. The IGUI displays these numerically. The existing IS status classes, which are generated from a schema file, contain only a few attributes. Adding more, eg arrays of link status and error counts, requires the IS classes to be hand edited. The extra data could still be displayed (eg graphically) in IGUI panels or by separate applications.
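As a purely illustrative sketch of the kind of extended status record that might be wanted (the real IS classes are generated from the schema file, and none of the names below come from our code):

    // Hypothetical extended module status record - illustrative only.
    // The real classes are generated from the IS schema file, so the
    // names and layout here would be replaced by the generated code.
    #include <string>
    #include <vector>

    struct ModuleStatusEx {
        std::string moduleName;          // eg "CPM crate 0 slot 5"
        int overallStatus;               // summary OK/error word
        std::vector<int> linkStatus;     // one status word per link
        std::vector<int> linkErrorCount; // accumulated errors per link
    };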

It was felt best not to have to run HDMC during a run, so the only application accessing modules via VME would be the run controller. But it would be useful to have the kind of detailed status information seen in HDMC panels (eg for the JEM) available in the IGUI. This should be available after the Load step (at present it is only available after the Start transition).

The various module services developers should make suggestions for what status data they would like in IS so that extended IS classes can be developed.

Online event monitoring

Online event monitoring divides into three separate tasks:

  • collecting ROD, ROS or other fragments from either the online monitoring service or from files (or a ROS pipe)
  • decoding the fragments to provide access to the fragment contents, eg CPM towers, jet elements, hits, etc
  • analysing and histogramming using this decoded data

We may have several analysis/monitoring programs, but they should all use a common decoding package, so the decoding has to be separate. The decoding should insulate subsequent code from knowledge of which event fragments (eg ROD, ROS) are being processed and also which ROD data format (neutral or module specific) is used.

We first need to agree an interface for the decoding; implementations for CPM, JEM and PPM data can then be developed by the relevant institutes. A preliminary brainstorming on this will be held at RAL on Thursday 29 January. Conclusions from this should be distributed for comment, with a further discussion suggested for the week of the JEP slice tests (9-13 February).
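As a purely illustrative starting point for that discussion, a common decoding interface might look something like the sketch below. All the names are hypothetical, not anything we have agreed; the point is only that analysis code would see decoded quantities and never the raw fragment type or ROD format.

    // Hypothetical decoding interface - names are illustrative only.
    #include <vector>

    struct CpmTower {
        int eta, phi;      // tower coordinates
        int emEnergy;      // electromagnetic energy
        int hadEnergy;     // hadronic energy
    };

    class EventDecoder {
    public:
        virtual ~EventDecoder() {}

        // Decode one raw fragment (from the monitoring service, a file
        // or a ROS pipe). The implementation determines the fragment
        // type (ROD, ROS, ...) and the ROD data format (neutral or
        // module specific) internally.
        virtual bool decode(const std::vector<unsigned int>& fragment) = 0;

        // Accessors to decoded contents, used by the analysis and
        // histogramming code, eg:
        virtual const std::vector<CpmTower>& cpmTowers() const = 0;
        // ... similar accessors for jet elements, hits, etc.
    };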

Offline event monitoring

At the testbeam and for calibration with the calorimeters we need to develop offline analysis and monitoring of our events as well. Although the environments are quite different we should try to keep developments synchronised and with as similar a philosophy as possible. Eg the online decoding API should be available offline. We should try to include people with ATHENA expertise in our discussions.

Other software for slice tests and the testbeam

For JEM tests it would be useful to obtain the LVDS phase and delay settings of the JEM inputs from the database. This also means being able to store them. Murrough and Cano should communicate about this.
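A minimal sketch of the kind of per-input record this might involve is shown below; all names are hypothetical, and the actual database schema is still to be worked out between Murrough and Cano.

    // Hypothetical per-input record for JEM LVDS timing settings;
    // the names and fields are illustrative only, not a real schema.
    struct JemLvdsSetting {
        int crate;    // JEM crate number
        int module;   // JEM slot within the crate
        int channel;  // LVDS input channel
        int phase;    // deserialiser phase setting
        int delay;    // input delay setting
    };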

Norman would like to develop multistep runs for CMM timing calibration. He and Murrough will talk on Thursday. Steve felt that multistep runs were not needed for CPM calibration - though perhaps calibration of input data from PPMs might benefit from this?

AOB

Bruce has new DSS firmware from James with the changes to the DSS behaviour in response to TTC broadcasts. This will mean a change to ttcviServices.

Since this reflects the decision on TTC broadcasts, Murrough should update the TTC software note accordingly and distribute it for comment by module firmware designers. Norman pointed out that the 6U and 9U RODs may respond differently to TTC broadcasts and need to be treated separately in the document.

Next meeting(s)

To be decided.


Last updated on 26-Jan-2004 by Murrough Landon