ATLAS     Level 1     Calorimeter Trigger     Software    

L1Calo Software Minutes: 6 July 2000 at QMW

 


Level 1 Calo Software Meeting at QMW on Thursday 6 July 2000.
-------------------------------------------------------------

The agenda was:

- Work on HDMC in the UK...........................Murrough (15')
- HDMC progress/plans in Heidelberg................Cornelius (15')
- General overview of DAQ work.....................Murrough (10')
- DAQ requirements for slice tests.................Norman (10')
- Recent DAQ work in the UK........................Bruce (20')
- Programming models?..............................Murrough (5')
- Use of TTC system................................Murrough (5')


Work on HDMC in the UK [Murrough]
---------------------------------

Murrough presented the work done on HDMC in the UK.
His slides are here.

The main features include:
- default.conf now split into a framework and several #include'd bits
  with a Makefile to concatenate them.
- register descriptions and parts files have been added (ROD) or modified
  (TTCvi, DSS)
- busserver now handles bus errors from the Bit3 card used at Bham
- a regbuild program now exists to create C++ ModuleRegister subclasses
  from selected register descriptions in the default.conf file
- Registers now have a real verify_me() method which writes and reads
  back several test patterns
- Work on the ModuleView is progressing, but Qt 2.1 has exposed some
  layout bugs and it is also clear that other clean up of register
  widget code is desirable


HDMC progress/plans in Heidelberg [Cornelius]
---------------------------------------------

Cornelius described the changes to HDMC made by Heidelberg.
His slides may be found here.

New and modified parts include:
- the DataSource can now generate realistic LAr calorimeter pulses
- a BusScan part can scan VME space for valid address ranges
- a new CrateView part shows modules in a crate (requires Qt 2.1)
- a comparator IOFrame can compare two datastreams
- a NetBusPrimitive provides the NetBus protocol without bus errors (for OS9)

On the framework itself:
- "pure" GUI views now have persistent state
- wait and repeat commands have been added to the internal scripting

The documentation now includes a new tutorial on creating a part.
Volker's diploma thesis also includes a chapter (in English) on HDMC.
The HDMC paper has been accepted for oral presentation at LEB.

Cornelius has looked at what DAQ -1 provides and foresees no major problems
in integrating it with HDMC.


General overview of DAQ work [Murrough]
---------------------------------------

Murrough gave an overview of the various strands of DAQ activities.
His slides are here.

The UK group has at various times discussed both long and short term DAQ
issues. For example, we started work on the Use Cases for the final system.
These still need to be fleshed out more fully. It is clear though that the
"normal DAQ run" case is the one least likely to be important in the medium
term (slice tests).

We do have a consensus on using a CPU in each crate, but more work is needed
on defining the high level services to be provided by eg a Crate object.

For the slice tests, we know we want to exercise as much of our hardware
and software (including DAQ -1) as possible. There are many DAQ -1 areas we
have not even investigated yet.

For the immediate future (ROD tests), we have decided to use the core DAQ -1
backend services while - for the moment - reusing or wrapping existing
dataflow code. However we have not yet settled on a model for the Module
class or how the run controller will communicate run state to its child
processes (producer and analyser).

Some work has been done, notably on the collection and display of histograms
using ROOT from distributed sources (eg several analysers). We have also
started work on the run controller, integration of bits of the "legacy" DAQ
and a simple GUI for accessing data in the DAQ -1 IS component.

Murrough also mentioned some issues which have been dormant for a while.
These include TTC software (in abeyance since Kithsiri left), the database,
and calibration procedures. We have also not been following progress (if any)
of ROD crate DAQ; and we have not been very good at producing or storing
documentation.

Lastly the issue of personnel: we have been losing, or are about to lose,
more people (Kithsiri, Tara, Volker; Scott) than we have gained, or are
about to gain (Bruce; 1/3 of a new RAL post).


DAQ requirements for slice tests [Norman]
-----------------------------------------

Norman spoke (without slides) about the software requirements for the slice tests.
These are apparent from the requirements for the slice tests themselves.

We should only expect "normal" DAQ running conditions towards the end of the
slice tests; mostly we will have a "calibration" like environment.

The slice tests will only succeed if all the components are working separately
beforehand. Thus at least some of the software is required beforehand.

The typical calibration like activities are generally of the form:
setup initial conditions; take N events; analyse and update database.
This is done for all channels in parallel, so the DAQ environment is the most
suitable (rather than some single program).
Compared with normal DAQ, where the monitor/analyser component is optional,
in the calibration regime the analyser is essential and is what acquires all
the useful data.

Norman originally wondered whether having a top level sequencer to issue a
series of runs via the run controller was the right approach. But this has
a large overhead in start/stop state changes.
A better idea would be to have a run controller like object which can itself
handle a sequence of operations.
But what services does this provide/require, eg access to calo services, DCS?

Norman also touched on some other issues:
- for the slice tests we will need a separate CPU per crate (VIC,Bit3 will
  not be appropriate)
- we need to learn about the _software_ viability of our DAQ architecture
- do we need CPUs in the RODs themselves (as opposed to the ROD crates)
  eg for doing some of the calibrations?


Recent DAQ work in the UK [Bruce]
---------------------------------

Bruce presented details of the DAQ developments in the UK.
His slides are here.

We need a new DAQ system for the slice tests and leading up to them by
way of ROD tests with the DSS. This will be the prototype for the full
final DAQ system. The immediate approach is an OO reworking of the old
"legacy" DAQ code in a DAQ -1 framework.

The legacy DAQ is still required for the histogram and hardware databases.
The menu driven interface to these can be started from the new DAQ system.
Bruce also showed how the old buffer manager (PBM) can be wrapped in a
suitable C++ class.

Of the new code, one building block is the ROOT based histogramming
developed by Tara from CDF examples. This includes histogram production
into a ROOT file or networked histogram database which can be viewed by
a presenter based on ROOT GUI classes.

Other building blocks include DAQ -1 components and HDMC for the hardware
access. Events will be stored in the ATLAS event fragment format.

Bruce completed his talk by showing some code excerpts from tests of his
ideas for the new producer and analyser programs: both using the HDMC
concept of modules as collections of parts.

Caveats (ie work to be done) include finding a memory leak in the histogram
code, completing the producer skeleton and including DSS, TTC, etc in it...


Programming models [Murrough]
-----------------------------

The slides for this talk are here.

We are now starting to produce our prototype modules - the specification (and
programming model) for the CPM has already been published.

We should aim for as much consistency between the different modules in the system
as we can manage.

Murrough reiterated the common guidelines such as all registers being readable,
no write only registers etc. All modules should have a module ID register in
a common format. In the discussion it was agreed that all FPGAs should include
a register to report the firmware version loaded into each chip; modules with
daughtercards should provide identification of which type of daughtercards are
present (with daughtercard IDs if possible).

Other comments: we should use the VME "endian" convention; also we will want
to define how we may use the VME64x CSR space for geographical addressing in
ROD crates.


Use of TTC system [Murrough]
----------------------------

The slides for this talk are here.

Continuing the theme of aiming for consistency, Murrough made some suggestions
for common use of the TTC system on our modules. For example, we should use
the same programming model for the TTCrx chip everywhere. It seems sensible
to allow for the use of TTC broadcast commands on all modules (even if there
is no identified need yet). This implies that the six TTCrx command lines and
eight subaddress lines should be connected to the VME PLD (or whatever
controls the module).

The list of possible TTC broadcast commands we might conceivably use includes:
- start/stop playback mode (global, PPM only, CPM only, JEM only, CMM only)
- start/stop backplane timing calibrations? (global, CPM, JEM)
- start/stop/zero rate histograms (PPM only?)
- start/stop readout pipelines
- restart LVDS or Glinks??
- start/stop/reset pipeline bus??
- start/stop other diagnostics??


Last updated on 11-Jul-2000. Send comments on this page to Murrough Landon