ATLAS Level 1 Calorimeter Trigger Software

L1Calo Software Minutes: 2 May 2002

 

L1Calo Software video conference on 2 May 2002

Present: Bruce, Eric, Gilles, Murrough, Norman, Steve, Thomas.

Actions from previous meetings

  • Norman has updated his draft overview document of the DAQ. After further discussion today (see below) this should be published within our group for further comment.
  • Other actions are carried over... (as usual, sigh).

Apologies for absence

Oliver was not able to attend this meeting. He reported by email that he and Karsten have been busy developing the software and firmware for the MCM and PPrASIC tests. The hardware should be available in four weeks and they expect to be ready by June/July.

Murrough: databases

Murrough reported that he has made some further progress on the database. The layer which integrates the various component data (eg online configuration, IS run parameters, calibrations, etc) is now mostly implemented (for the CP system modules). User documentation is still required, but there is a reasonable amount of Doxygen documentation for the APIs. He and Bruce spent a day earlier this week trying to integrate the database and run controllers with the module services code. Some problems in the parts initialisation were found. Progress was also made in moving the new module services code into a set of CMT packages.

Bruce: module services

Bruce has been distracted by TDAQ week, but has made further progress moving his "looper" code into the new module services. Only the DSS remains to be finished, though some disentangling of the component subpackages is required for a clean move to CMT.

The part initialisation problem seen when using module services from the run controllers is probably understood. We must document the innards of the PartManager to help us avoid this kind of trouble in future. He also remarked that increased use of shared libraries makes it more tedious to debug with gdb as you have to specify all the source paths explicitly.

Bruce hopes to put the updates into CVS very soon. It was felt that the HDMC updates (removing the old L1CalDaq) should only be done when the new module services is ready.

Bruce also reported that Markus Joos at CERN has found some problems when testing his industrial PC. When we (eventually) need a PC with many PCI slots, we might instead consider some new standard PCs with two PCI buses.

Thomas: JEM

Thomas has been gaining experience with the DSS, using it together with a parts file for the JEM to test the board. The implementation of submodules in the new module services will be very welcome in organising the JEM parts file. Like many of us, Thomas finds the HDMC mixing of byte, word and longword addressing confusing. At Mainz they are using the VMELinux VME driver. Now that Bruce has tested the CERN driver (though not yet thoroughly), we should move towards that.

They have now tested most of the board. What they now need is the ability to send in LVDS test vectors and capture readout data or RoIs in the DSS with the Glink daughtercard. With the present state of our software and the DSS firmware, this is not easy to do. As a first step, it would be possible with HDMC and a TTCvi to load a "single shot" of 32K slices of LVDS data and issue one or a small number of L1As to capture a few events in the DSS. Since our proposed solution to the so-called "L1A problem" is not yet implemented, the events will have incorrect BCIDs and L1IDs, so the DSS cannot be used to check the incoming events automatically. Checking them with software would need the JEM simulation; however, simple input data could be checked by hand to verify that the basic event format is correct.

For real time tests of the JEM, some of the test cards being developed for testing the CPM may also be useful to Mainz.

Bruce remarked that he needs to check that the LVDS source and Glink destination cards are properly implemented in his module services code. So far only the Glink source and Slink destination cards have been fully tested. He will also send example code demonstrating the use of submodules to Thomas.

Steve: simulation

Steve has completed the CPROD simulation and is keen to integrate this work with the CpRod test code. But he also has to finish the cabling document.

Since attending the DIG training session he has installed the Online software at Birmingham and has used the monitoring framework and the Event Dump to display events generated from his simulation.

He has also thought about the organisation of the local systems at Birmingham. The online software must be available on both PCs and crate CPUs, and it needs a common filesystem, so a single shared installation of the online software and our own software seems best. The various systems should ideally run the same version of Linux, but at least the compiler and libraries must be the same.

Gilles: CPM

Gilles has been rechecking his readout firmware for the CPM, which is now being tested by Richard at Birmingham. He hopes soon to test VME-- access to both the CPM and the TCM in the same crate.

Norman: VME-- document

The completed VME-- specification has been circulated. We should probably have a review of this before making it an official EDMS document. The only worry expressed concerns the use of the very highest A24 addresses.

Schedule

The draft schedule discussed at the last meeting has been raised with the "hardware people". We aim for CP system software in early/mid summer, with extension to the JEP in late summer.

Bruce: issues from TDAQ week

Bruce reported on some of the main points arising from the recent TDAQ week. He has written a longer summary which he will make available on the web. [URL awaited!].

One of the more significant developments has been the establishment of working groups discussing some TDAQ global issues such as (a) partitioning, (b) what defines a run.

Bruce also made the general point that we need to be present at these weeks: our feedback is useful to the rest of TDAQ, and it is important that we also present what we are doing in order to benefit from feedback from the rest of the community.

Thomas: calibration meeting

After difficulty with earlier possible dates, Thomas will propose either the evening of Thursday 23 May or else Monday 27 May as dates for discussion with LAr representatives. He will also ask about a possible meeting with TileCal on Wednesday 22 May.

At this point, the video conference with Mainz had to be curtailed.

Murrough/Norman: working group activities

Murrough reported that he has submitted comments on the Online software overall requirements documents. The main request is to consider the issue of extending the run state model. We should also suggest they work through use cases for various calibration scenarios.

The situation in the database working group(s) area is more confused: an original Online working group has been enlarged to cover wider issues than just the Conditions database API and is now a DIG working group. We have been asked to give (yet again?) our likely types, volumes and update frequencies of database data, and how we expect to access it.

Norman reported that the ROD crate DAQ document is now almost finished and will be distributed very soon. He thinks the specification of ROD crate DAQ that it describes fits our requirements.

In the LVL1-LVL2 dataflow area, the interface document has been updated and passed to the PESA group for comments.

Norman: event format document

We are asked to comment on the updated event format document. The main changes for us are the extension of the event ID to 32 bits and a change to the format version to allow separate versions of header and payload. There have also been changes in the ROD source ID.

One unresolved question concerns the use of the detector event type. We are considering using it to identify types of calibration run. It (or a bit field within it) may also be useful to identify the successive steps in a multistep calibration run. The step number would need to be propagated to the RODs during the state transitions at each step.
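
As an illustration, the sketch below (in C++) shows one possible encoding. The layout is purely hypothetical: it assumes the low half of a 32-bit detector event type word carries the calibration run type and the high half carries the step number; neither the field widths nor the names have been agreed.

    #include <cstdint>

    // Hypothetical layout, for illustration only: low 16 bits hold the
    // calibration run type, high 16 bits hold the step number within a
    // multistep calibration run. No such format has been agreed.
    const uint32_t kCalibTypeMask = 0x0000ffffu;
    const unsigned kStepShift = 16;

    uint32_t encodeDetEventType(uint16_t calibType, uint16_t step)
    {
        return (uint32_t(step) << kStepShift) | calibType;
    }

    uint16_t calibTypeOf(uint32_t detEventType)
    {
        return uint16_t(detEventType & kCalibTypeMask);
    }

    uint16_t stepOf(uint32_t detEventType)
    {
        return uint16_t(detEventType >> kStepShift);
    }

With an encoding of this kind, only the step field would need updating at each state transition, leaving the calibration type unchanged.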

Norman: overall DAQ document

Discussion on this was combined with brainstorming on test organisation. Norman should incorporate the key points from our discussion. He should also elaborate on what activities we need to do at each state transition. The tables in Software Note 001 may no longer correspond to what we now expect.

Test organisation

We had another brainstorming-style discussion on test organisation.

We started by discussing the new diagram in Norman's overall DAQ document outlining the use of test vectors. We expect to manually create descriptions of the test vector inputs to be loaded into each module. The descriptions will follow the style established by Bill Stokes for the CPROD data. One such file describes the data to be generated for one module. No file implies no data required (eg the module gets its data from another module). The file or the configuration database should specify if data is only required for a subset of the input channels in a module.

A type of test run should be completely described by the set of these files together with the configuration database, run parameters, trigger menu and calibration data to be loaded into the hardware modules and their software simulation. The aim is to have a set of named tests which can be selected by the user. Ideally these could be selected from the IGUI; however, it will be easier to implement them initially as static "partition" files which must be chosen before the DAQ is started.

The run control system (ie one run controller whose state transition must be executed first) uses these files to generate all the input and simulated output data for the whole system. This must be done in one place: it can't be distributed among crate processors because of the data transfer required in the simulation.

For the moment the generated input data and simulated output data will be passed to the run controllers in each crate via files. But the API should be designed so that the mechanism is invisible to the module services layer. The module services should be passed a data generator object which each module (and submodule) can query to obtain the data it needs to load. The initial implementation will be simple file readers as are presently used in the CpRod tests and in the simulation code.
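
To make the intended separation concrete, here is a minimal sketch of such an interface, with a simple file-reading implementation of the kind presently used in the CpRod tests. All class, method and path names are hypothetical; only the shape of the API is the point.

    #include <fstream>
    #include <string>
    #include <vector>

    // Hypothetical API: module services are handed a generator and each
    // module (or submodule) queries it for the data it must load. The
    // transport mechanism stays invisible to the module services layer.
    class TestDataGenerator {
    public:
        virtual ~TestDataGenerator() {}
        // Data words for one module, identified by the crate and module
        // names taken from the configuration database.
        virtual std::vector<unsigned long> dataFor(const std::string& crate,
                                                   const std::string& module) = 0;
    };

    // Initial implementation: a per-module flat file reader.
    class FileDataGenerator : public TestDataGenerator {
    public:
        explicit FileDataGenerator(const std::string& baseDir)
            : m_base(baseDir) {}
        std::vector<unsigned long> dataFor(const std::string& crate,
                                           const std::string& module)
        {
            std::vector<unsigned long> words;
            std::ifstream in((m_base + "/" + crate + "/" + module
                              + "/input.dat").c_str());
            unsigned long w;
            while (in >> std::hex >> w)
                words.push_back(w);
            return words;
        }
    private:
        std::string m_base;
    };

A later implementation could replace the file reader with, say, a network client, without any change to the module services layer.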

The test description files will be defined in the database and linked to the modules. The run time database API will return the description file (if any) to be used for each of the modules in the database. Many modules may use the same test description file, but a separate generated data file will be created for each module and kept in a crate/module directory structure using the crate and module names in the database.
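
A small sketch of how such a lookup might behave (all names and paths illustrative): many modules can share one description file, while each module is assigned its own generated data file under a crate/module directory.

    #include <map>
    #include <string>

    // Hypothetical result of the run time database lookup for one module.
    struct ModuleTestInfo {
        std::string descriptionFile;  // empty => no input data to generate
        std::string generatedFile;    // unique per module
    };

    ModuleTestInfo testInfoFor(const std::string& crate,
                               const std::string& module,
                               const std::map<std::string, std::string>& links)
    {
        ModuleTestInfo info;
        std::map<std::string, std::string>::const_iterator it =
            links.find(crate + "/" + module);
        if (it != links.end()) {
            info.descriptionFile = it->second;  // may be shared by many modules
            info.generatedFile = "testdata/" + crate + "/" + module
                                 + "/input.dat";  // one file per module
        }
        return info;
    }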

We also discussed the timescale and priorities for implementing this software. The database API should be established first. Bruce needs to complete his work on module services and, unless the requested DSS firmware updates are delivered very soon, has to move the "DSS kicker" program to the new scheme. Steve has to use the existing and new database API to configure the whole simulation. Since the CPROD test setup is an existing and well used system we should concentrate on that before extending it to the CPM/CMM (or JEM/CMM) systems.

Next meeting(s)

Many of us will be at CERN for the next DIG training on 23-24 May and there should be time for discussion then. Following that we aim for another video conference with Mainz on Tuesday 11 June. The UK end will be in Birmingham.

Actions

Actions from this and previous meetings:
  • Bruce: provide submodule examples to Thomas
  • Bruce: complete Module Service document and submodule syntax description
  • Bruce/Murrough: define required HDMC improvements
  • Murrough: write database user guide
  • Murrough: complete run control design document
  • Murrough/Bruce/Oliver: merge HDMC and other L1Calo CVS repositories?
  • Norman/All: complete the draft overall guide to L1Calo DAQ software design (especially regarding slice tests)


Last updated on 02-May-2002 by Murrough Landon