ATLAS Level 1 Calorimeter Trigger Software

L1Calo Software Minutes: 7-8 November 2001 at RAL

The meetings took place over two days, with varying numbers of people attending.

Discussion on HDMC issues

Bruce, Murrough and Oliver discussed a few issues relating to HDMC prior to the meeting.

Reorganisation of HDMC. We don't want to make major changes to HDMC before the slice tests, but we should agree which classes we consider to form the "Hardware Access Layer" library. We should also consider whether some classes developed for the demonstrator systems should be declared obsolete and no longer maintained.

Small changes we should make include providing the full set of operators for the Address classes. We should also ensure we have a full set of basic types and use them throughout our software.
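As an illustration, a minimal sketch of what a complete set of Address operators might look like (the class shown and the u32 typedef are purely illustrative, not the actual HDMC code):

    // Illustrative sketch only - not the real HDMC Address class.
    typedef unsigned int u32;   // example of a project-wide basic type

    class Address {
    public:
        explicit Address(u32 value = 0) : m_value(value) {}
        u32 value() const { return m_value; }

        // The full set of comparison operators
        bool operator==(const Address& rhs) const { return m_value == rhs.m_value; }
        bool operator!=(const Address& rhs) const { return m_value != rhs.m_value; }
        bool operator< (const Address& rhs) const { return m_value <  rhs.m_value; }
        bool operator<=(const Address& rhs) const { return m_value <= rhs.m_value; }
        bool operator> (const Address& rhs) const { return m_value >  rhs.m_value; }
        bool operator>=(const Address& rhs) const { return m_value >= rhs.m_value; }

        // Arithmetic for register offsets within a module
        Address  operator+ (u32 offset) const { return Address(m_value + offset); }
        Address& operator+=(u32 offset)       { m_value += offset; return *this; }

    private:
        u32 m_value;
    };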

Actions from previous meetings

Many of the actions are carried over - see below for the full list.

Regarding software process, we should see what emerges from the recommendations to be made at the NIKHEF workshop next week and discuss them at a future meeting.

Although Murrough has not yet documented his existing run control prototype, he has started a new document which focuses more on our emerging consensus.

Readout Issues

Norman raised a few readout-related issues which he will return to in a talk later in the meeting.

We have previously imagined that we need proper ROBins to run our system flat out at 100 kHz. In this case we would have to devise a mechanism for monitoring a fraction of the events received, since the maximum rate at which we might be able to analyse them is probably only 1 kHz. However we could instead run in bursts of say 100 events at 100 kHz, separated by gaps which would allow the ROS (now equipped only with Slink PCI cards) to catch up.
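As a rough illustration of the timing (the 100 kHz rate and 100-event bursts are from the discussion above; the 1 kHz drain rate is an assumption based on the monitoring estimate):

    // Back-of-envelope timing for burst-mode readout (sketch, not a real tool).
    #include <cstdio>

    int main() {
        const double burstEvents = 100.0;   // events per burst
        const double l1aRate     = 100e3;   // 100 kHz L1A rate within a burst
        const double drainRate   = 1e3;     // ASSUMED sustainable ROS rate (Hz)

        double burstLength = burstEvents / l1aRate;                 // 1 ms
        double gapNeeded   = burstEvents / drainRate - burstLength; // ~99 ms

        printf("burst: %.3f ms, gap needed between bursts: %.1f ms\n",
               burstLength * 1e3, gapNeeded * 1e3);
        return 0;
    }

So each 1 ms burst would need a gap of order 100 ms before the next one, giving an average rate of around 1 kHz.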

Is there anything we can't test properly in this scenario? The purity of data on the links (ie bit error rates) must be tested separately in any case.

Questions we need to check: how much memory do we have in our RODs (CP and PP) to store events? Can we devise a scheme to synchronise L1 accepts with the LHC clock (and with particular time slices stored in the playback memories)?

DIG Wishlist

We have been asked to produce a wishlist for the DIG forum at the NIKHEF workshop. We shouldn't ask for anything we aren't sure we really want. We should also state our requests in terms of wanting a solution to our requirements, rather than a particular implementation (eg the ROS) which we may think we currently want. On the other hand, we do know that the ROS software already exists, whereas ROD crate DAQ is still being discussed.

If asking for the ROS, we will want a single-PC ROS with up to 9 Slinks. We will need to install and maintain this outside CERN, so we need proper documentation and support.

Other issues we might raise:

  • Behaviour of the IGUI regarding run types: should it be possible to change the run type at the start transition?
  • Does the Online group have a bug reporting mechanism? If so, could we also use it internally?
  • For calibrations, the run control has no concept of a run taking place in multiple phases - there is no neat way of flagging the events from different steps in a multi-point calibration.
  • What should be done at the stop transition regarding waiting for events in pipelines to be flushed?

Putting the pieces together for module tests

As the CMM is likely to be the first major new module to be delivered, Norman wanted an overview of where we are in putting together the necessary software components to test it.

He would like to write a "human style" description of the input test vectors (like the one Bill Stokes developed for CPM test vectors). This then needs to be expanded into complete input test vectors, which are processed to generate the expected outputs using Steve's simulation package. The environment (eg trigger menu and calibration settings) used in simulating the system needs to be kept together with the test vector inputs and outputs.

The input test vectors need to be loaded via the run control, and the output data collected (eg via the ROS) and made available to a monitoring program to check against the expected data. The run control system needs to be driven through a number of steps by a calibration sequencer of some kind.
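A minimal sketch of the final checking step, assuming both the collected and the expected events have already been unpacked into plain word vectors (all names and types here are invented for illustration):

    // Sketch of the monitoring-side comparison (names hypothetical).
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    typedef std::vector<unsigned long> Event;      // one event as raw words
    typedef std::vector<Event>         EventList;

    // Compare events collected from the hardware (eg via the ROS) against
    // the expected events produced by the simulation from the same vectors.
    int compareEvents(const EventList& collected, const EventList& expected) {
        int errors = 0;
        if (collected.size() != expected.size()) {
            printf("event count mismatch: %lu collected, %lu expected\n",
                   (unsigned long)collected.size(),
                   (unsigned long)expected.size());
            ++errors;
        }
        std::size_t n = collected.size() < expected.size() ? collected.size()
                                                           : expected.size();
        for (std::size_t i = 0; i < n; ++i) {
            if (collected[i] != expected[i]) {     // element-wise comparison
                printf("event %lu differs from expectation\n",
                       (unsigned long)i);
                ++errors;
            }
        }
        return errors;
    }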

None of the above elements is yet completely available. Bill's test vector description language is specific to the CPM and would need extending for the CMM or other modules. Steve's simulation doesn't yet include the CMM, so that will have to be developed.

The prototype run controllers don't yet talk to real hardware module classes. We also don't yet have our hands on the ROS. We haven't yet tried testing the monitoring interface, though it is expected to be fairly simple. Also, the calibration (or test) sequencer should be easy to develop from an example program in the Online software distribution.

But it does all have to be done. Norman expects to do the work on the CMM simulation. Murrough will test the calibration sequence idea. We will keep up pressure to get the ROS. In the absence of the ROS, we could try pumping data from the Slink reader into the monitoring system as a test.

Status and plans at each site

This will be covered by parts of the talks at the main meeting. Murrough reported that Stockholm may have two diploma students available to work on a short-term, well-defined C++ project. Adapting the CPM simulation to the JEM may be a good task.

Online software overview and demo

Murrough gave a presentation on the Online software, briefly summarising the various components with a couple of notes regarding installation. This was followed by a short demonstration of the IGUI, database editing tools and a few details of our run control prototype for the slice tests. People were encouraged to install the latest version of the Online software at their local sites.

Module Services

Starting the Thursday morning session, Bruce introduced his draft document on the design issues for the module services layer. These are driven by the requirements arising from the five or so other packages which depend on module services, and by the constraints coming from the packages on which module services itself depends.

A few key points:

  • Some modules come with several FPGA flavours. The software needs to check that it has created a consistent incarnation (see the sketch after this list).
  • Are there any special considerations in the PP system coming from the pipeline bus - ie an alternative bus by which the modules may be accessed?
  • Synchronisation issues and access to the modules via the TTC system.

Also, we all need to consider which specific services are needed for each of our modules to ensure we aren't forgetting anything.
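As an example of the flavour consistency check mentioned above, a hypothetical sketch (the class, the register access and all names are invented; real code would go through the HDMC layer):

    // Hypothetical sketch of a flavour consistency check in module services.
    #include <cstdio>
    #include <stdexcept>
    #include <string>

    class ModuleServicesBase {
    public:
        // Verify that the FPGA flavour actually loaded in the hardware
        // matches the flavour this software object was configured for.
        void checkFpgaFlavour(unsigned expectedFlavour) {
            unsigned loaded = readFirmwareIdRegister();
            if (loaded != expectedFlavour)
                throw std::runtime_error("FPGA flavour mismatch: loaded " +
                                         hex(loaded) + ", software expects " +
                                         hex(expectedFlavour));
        }
    private:
        // Stub: real code would read a firmware ID register via HDMC/VME.
        unsigned readFirmwareIdRegister() { return 0x1; }

        static std::string hex(unsigned v) {
            char buf[16];
            std::snprintf(buf, sizeof buf, "0x%x", v);
            return buf;
        }
    };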

Run Control Design

Murrough then showed a few slides of UML diagrams describing the current ideas for the design of the run controllers, and in particular their relation to the module services package and the underlying HDMC-based layer. We don't propose to modify the basic HDMC parts manager for the moment, but we do want to maintain coherence between the DAQ and the interactive diagnostics. We therefore want to establish a way to initialise hardware module classes from HDMC parts files, while deriving the configuration of modules in the crate from the Online database.
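One possible shape for this, as a hedged sketch (ModuleConfig, HardwareModule and all their members are hypothetical names, not the agreed design):

    // Sketch of initialising module classes from HDMC parts files, with the
    // crate configuration taken from the Online database (names invented).
    #include <cstddef>
    #include <string>
    #include <vector>

    struct ModuleConfig {          // derived from the Online database
        std::string type;          // eg "CPM", "CMM"
        int         slot;          // VME slot in the crate
        std::string partsFile;     // HDMC parts file describing the registers
    };

    class HardwareModule {
    public:
        // Build the register map from the HDMC parts file at the base
        // address implied by the slot number.
        explicit HardwareModule(const ModuleConfig& cfg) {
            loadParts(cfg.partsFile, cfg.slot);
        }
    private:
        // Stub: real code would hand the file to the HDMC parts manager.
        void loadParts(const std::string& file, int slot) {}
    };

    // A run controller would then loop over the modules configured
    // for its crate in the Online database:
    void setupCrate(const std::vector<ModuleConfig>& crate) {
        for (std::size_t i = 0; i < crate.size(); ++i)
            HardwareModule module(crate[i]);   // create and initialise
    }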

Simulation Package Tutorial

Steve gave us a tutorial on the simulation package. He described the main classes and gave examples of where users of the general framework would need to develop their own subclasses. His CPM simulation could be copied and modified to provide much of the JEM simulation.
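By way of illustration, a JEM subclass might look roughly like this (the base class name and its interface are assumptions, not the actual package API):

    // Hypothetical sketch of extending the simulation framework for the JEM.
    class ModuleSimulation {                 // assumed framework base class
    public:
        virtual ~ModuleSimulation() {}
        // Process one timeslice of input data, producing module outputs.
        virtual void simulateTimeslice(int slice) = 0;
    };

    class JemSimulation : public ModuleSimulation {
    public:
        virtual void simulateTimeslice(int slice) {
            // The jet/energy-sum algorithms would go here, adapted
            // from the corresponding code in the CPM simulation.
        }
    };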

One unresolved issue is how to deal with Level-1 accepts (L1As). These are not important for the real-time path, but a suitable mechanism for associating L1As with particular timeslices of the input test vectors is needed to produce the correct readout data. But then, how would we achieve this synchronisation in the real hardware?
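One conceivable mechanism, sketched purely for illustration (nothing here reflects an agreed design):

    // Tag each timeslice of the input vectors with an L1A flag; only the
    // flagged slices contribute to the expected readout data.
    #include <cstddef>
    #include <vector>

    struct TimesliceTag {
        int  slice;    // index into the playback memory / input vectors
        bool l1a;      // whether a Level-1 accept fires on this slice
    };

    std::vector<int> acceptedSlices(const std::vector<TimesliceTag>& tags) {
        std::vector<int> out;
        for (std::size_t i = 0; i < tags.size(); ++i)
            if (tags[i].l1a)
                out.push_back(tags[i].slice);
        return out;
    }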

Progress with the ROS

Bruce presented the status of his use of components of the ROS. Use of the basic Slink receiver is now straightforward, but the major hurdle in this area continues to be the lack, despite requests, of a formal distribution mechanism and of documentation to simplify (or even enable) the installation and configuration process. As a result, a complete "collapsed ROS" implementation has not been tested.

System setup for the slice tests

We discussed a common system setup for the slice tests, including the prerequisite software. Some of this is already documented on the software website: in particular there are pages on the prerequisite software needed to compile and run both our software and the Online software. There are also some suggestions for the operating system setup, including a list of required RPMs.

AOB

The ATLAS offline community is expected to choose the CMT tool for managing all its software packages, and the next Online software release will be made using CMT instead of SRT. Once the Online group has developed its working model, we should consider adopting it.

Actions

Actions from previous meeting(s):
  • Norman: find/define remaining data formats
  • Steve: documentation for simulation package
  • Steve: consider organisation of test vectors
  • Murrough: circulate instructions for using the run control demo
  • Murrough: investigate CodeWarrior code development tool
  • Bruce/all: investigate other software development tools
  • Murrough: document run control prototype package
  • Murrough: update HDMC changes document with recent suggestions and include the proposed long term strategy
  • Bruce: identify run types in the present CPROD tests
  • Bruce: complete draft document describing Module Service package
Actions from this meeting:
  • Bruce: contact Juergen Hannappel regarding his VME driver and investigate the Concurrent driver (include in HDMC?)

Next meeting

The next meeting will be arranged by email...


Last updated on 16-Nov-2001 by Murrough Landon