
L1Calo Software Minutes: 29 January 2004

 

Monitoring discussion on 29 January 2004

Present: Adrian, Bruce, Murrough, Norman, Tamsin. Cano joined by phone towards the end.

Inclement weather prevented more participants from joining the discussions. Consequently the meeting focussed more on explaining the online architecture, especially regarding monitoring, and we spent little or no time discussing what we would actually do.

It was still felt useful to summarise the discussion in terms of what tools are (or might be) available, where we still have uncertainties about what exists, etc.

Part of the discussion centred around a list of questions, mainly from Norman, and their possible answers in (a) the online world and (b) the offline/ATHENA world. A summary of this is shown in the table below.

Some further explanations (some not much discussed at the meeting, and some including my personal views) are added in the later sections, which also attempt to collect links to further information.

Table of monitoring questions

• Simulated events?
  Online: could broaden the online API to read files, or else pipe events from files into the online monitoring system.
  Offline: ?

• Event structure/content?
  Online: not discussed? Not quite sure what this question means.
  Offline: ?

• How/where to get events?
  Online: via the existing API in the online monitoring package; we could broaden this to read files (see below).
  Offline: ATHENA services via the transient event store and bytestream converters.

• How to get other info/parameters
  Online: configuration database, information service (and conditions DB?).
  Offline: only the conditions DB is available, plus the jobOptions file?

• Architecture of monitoring program
  Online: we are fairly free to choose, but there is a proposal for a common skeleton/framework for testbeam monitoring applications.
  Offline: standard ATHENA job?

• Histogram package/interface
  Online: no standard? Best online support is for ROOT.
  Offline: generic ATHENA histogram interface (can have ROOT underneath).

• Display package
  Online: a ROOT based histogram display application (oh_display) exists in the online histogramming transport package.
  Offline: ?

• Task structure
  Online: we prefer separate processes. Should be OK?
  Offline: standard ATHENA job?

• How to launch/terminate
  Online: the IGUI can start monitoring programs.
  Offline: ?

• Interaction with run control
  Online: the proposed skeleton has run control integration.
  Offline: n/a

Online monitoring architecture

The testbeam workshop last November has some useful talks about monitoring. Slide 7 in Chiara Roda's introductory "Monitoring strategy" talk in the Monitoring session shows the general architecture.

Proposed testbeam monitoring program skeleton

Also at the testbeam workshop, Roberto Ferrari presented ideas for a common skeleton or framework for user monitoring programs at the testbeam. The basic idea is to use a finite state machine (FSM) similar to that of the run control, so that all the user code in a monitoring program lives in callback methods executed at the run control state transitions or from an event loop.
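
As a very rough sketch of how such a skeleton might look (the class and method names below are my invention for illustration, not the actual testbeam framework):

    // Sketch only: invented names, not the real testbeam framework.
    class Event;   // whatever event fragment type the framework supplies

    class MonitoringSkeleton {
    public:
        virtual ~MonitoringSkeleton() {}

        // Callbacks invoked at the run control state transitions:
        virtual void configure()   {}  // eg read parameters, book histograms
        virtual void startRun()    {}  // eg reset histograms
        virtual void stopRun()     {}  // eg publish or save histograms
        virtual void unconfigure() {}  // eg release resources

        // Called from the framework's event loop while running:
        virtual void analyseEvent(const Event& event) = 0;
    };

The framework (not the user) would own the FSM and the event loop, driving the transitions in step with the run control, so that all detector-specific code lives in the callbacks.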

There is an implementation of this, used at the muon test beam in 2003, which also uses some external (ie non-ATLAS) interprocess communication packages. This private development may be found on AFS at /afs/cern.ch/user/v/vanelli/public/monitor. There are versions with histogramming via HBOOK and via ROOT.

They plan to make this a more general framework "with dynamic loading, not linking detector code, but loading it at run time". They also think "some structured decoding and database handling of configuration and conditions parameters" is needed.

L1Calo monitoring API?

Towards the end of 2003 we discussed having our own API to obtain events for monitoring. In addition to monitoring events in real time we wanted to read events from files and also, since monitoring in the ROS was not implemented, from pipes.

One possible solution would be to write small programs to read files/pipes and put the events into the online monitoring system. Then the standard online monitoring API could be used to collect them.
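
A sketch of such a feeder program, assuming a simple length-prefixed file format and with publish() standing in for the real call into the online monitoring system (both are my inventions for illustration):

    // Sketch only: the file format and publish() are invented stand-ins.
    #include <cstdint>
    #include <fstream>
    #include <iostream>
    #include <vector>

    // Assumed file layout: each fragment is a 32-bit word count
    // followed by that many 32-bit words.
    static bool readFragment(std::istream& in, std::vector<uint32_t>& frag) {
        uint32_t nwords = 0;
        if (!in.read(reinterpret_cast<char*>(&nwords), sizeof(nwords)))
            return false;
        frag.resize(nwords);
        if (nwords == 0) return true;   // empty fragment
        return static_cast<bool>(in.read(reinterpret_cast<char*>(&frag[0]),
                                         nwords * sizeof(uint32_t)));
    }

    // Stand-in for handing the fragment to the online monitoring system,
    // from which monitoring programs would sample it via the standard API.
    static void publish(const std::vector<uint32_t>& frag) {
        std::cout << "published fragment of " << frag.size() << " words\n";
    }

    int main(int argc, char* argv[]) {
        if (argc < 2) { std::cerr << "usage: feeder <file>\n"; return 1; }
        std::ifstream in(argv[1], std::ios::binary);
        std::vector<uint32_t> frag;
        while (readFragment(in, frag))
            publish(frag);
        return 0;
    }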

Alternatively we could provide our own abstract API, similar to that of the online, but allowing implementations to read files/pipes directly. A proposed API (FragmentIterator) was added to the infraL1Calo package for discussion (which never happened).

An implementation of this API using the online monitoring API is in the eventMonitoring package. This package is provided as an example. The use of this API and the package structure to be adopted for our event monitoring are still to be discussed and agreed.
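
The actual FragmentIterator proposal is in infraL1Calo and is not reproduced here; the following is just a guess at the general shape such an abstract API might take, with invented names:

    // Guess at the shape of the interface: invented names; the real
    // FragmentIterator in infraL1Calo may well differ.
    #include <cstdint>
    #include <vector>

    class FragmentIterator {
    public:
        virtual ~FragmentIterator() {}

        // Fill 'fragment' with the next event; return false when no
        // more events are available (end of file, sampling stopped, etc).
        virtual bool next(std::vector<uint32_t>& fragment) = 0;
    };

    // Possible concrete implementations (sketched as comments):
    //   class FileFragmentIterator    : public FragmentIterator {...};  // reads files
    //   class PipeFragmentIterator    : public FragmentIterator {...};  // reads pipes
    //   class MonitorFragmentIterator : public FragmentIterator {...};  // online monitoring API

Monitoring code written against the abstract interface could then switch between live events and file playback without change.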

Online/offline commonality

Some subdetectors talk about using the same bytestream decoding in both the online and offline environments. There may be problems with ATHENA services, package structure etc. Also some groups think the needs are very different (eg offline, all kinds of calibrations etc are applied).

The general feeling seems to be for us to try to keep things as common between online and offline as possible. We need to understand the offline bytestream decoding better...
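
One possible pattern (just my sketch, with invented names and a made-up data format) is to keep the decoding itself in a plain function with no ATHENA and no online dependencies, and wrap it separately in each environment:

    // Sketch only: invented names and a made-up data format.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct DecodedTowers {
        std::vector<int> towerEt;   // eg trigger tower transverse energies
    };

    // Environment-neutral decoding core: no ATHENA, no online software.
    DecodedTowers decodeL1CaloFragment(const std::vector<uint32_t>& rodData) {
        DecodedTowers result;
        for (std::size_t i = 0; i < rodData.size(); ++i)
            result.towerEt.push_back(static_cast<int>(rodData[i] & 0xff));
        return result;
    }

    // Online: a monitoring program calls decodeL1CaloFragment() on sampled
    // fragments and fills histograms directly. Offline: an ATHENA bytestream
    // converter calls the same function and packs the result into RDOs.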

Bytestream converters

In trying to find some descriptions of how ATHENA bytestream decoders are supposed to work, I found a few documents.

In general the offline uses bytestream converters in both directions:

  • from the raw bytestream (ROD, ROS, etc fragments) to so called "raw data objects" (RDOs), ie objects in the transient event store which simply contain all the data decoded from the bytestream. This direction of conversion is the one normally used in processing ATLAS event data.
  • from RDOs back to the raw bytestream. This is used to convert RDOs created by the simulation to bytestreams, eg for testing the high level trigger, etc.
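
As a purely illustrative sketch of the two directions (real ATHENA converters are classes registered with the conversion service; the RDO type and function names here are invented):

    // Sketch only: invented RDO type and function names.
    #include <cstdint>
    #include <vector>

    struct MyRDO {                    // stand-in for a raw data object
        std::vector<uint32_t> words;  // the data decoded from the bytestream
    };

    // Bytestream -> RDO: the direction used in normal event processing.
    MyRDO decodeFragment(const std::vector<uint32_t>& rodFragment) {
        MyRDO rdo;
        rdo.words = rodFragment;      // trivially keep the whole payload
        return rdo;
    }

    // RDO -> bytestream: used eg to turn simulated RDOs back into a
    // bytestream for testing the high level trigger.
    std::vector<uint32_t> encodeFragment(const MyRDO& rdo) {
        return rdo.words;
    }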

In the offline environment there are bytestream converters already in place for RoI data only (and these are probably not completely implemented and are surely not up to date). Conversion from RDOs to bytestreams (and vice versa?) is part of the TrigT1 package and its subpackages (TrigT1RoIB, TrigT1Result, TrigT1ResultBytestream). Doxygen documentation is available for successive ATHENA releases.

Next meeting(s)

We will have a further discussion on Wednesday 4 February at 14:30 GMT.


Last updated on 05-Feb-2004 by Murrough Landon