
L1Calo Software Minutes: 21 July 2004

 

Monitoring discussion at Birmingham on 21 July 2004

Present: Adrian, Alan, Bruce, Gilles, Jürgen, Murrough, Pete, Steve.
By phone (part time): Eric, Norman.

NB these notes mainly summarise what was written on the whiteboard at the meeting with a few comments from memory.

Management

The meeting started with a discussion about management of both online and "offline" (ie in ATHENA) monitoring and who will be involved. The outcome will be circulated later.

Required histograms for the test beam

Norman suggested we needed the following information from the test beam:

  • LUT outputs from the PPM to look at the energies
  • FADC samples to look at pulse shape and timing
  • Matching of PP data with calorimeters
  • Publicity plots
  • Comparison of CTP with L1Calo output
The first two can be done standalone. The last three need built events.

The specific suggestions for sets of histograms covering the first two points above are:

  1. PPM LUT output: separate histogram for each channel
  2. Sum of (1) for all channels connected to LAr calorimeter
  3. Sum of (1) for all channels connected to Tile calorimeter
  4. Sum of (2) and (3)
  5. PPM FADC samples: separate histogram for each channel refreshed every event - to find L1A timing
  6. Scatter plot of FADC peak vs LUT output for each channel - as a BCID check
  7. Plots of disconnected channels - for noise estimate
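
As an illustration of how the sets listed above might be booked, the sketch below uses ROOT histogramming. The use of ROOT, the histogram names and binning, and the channel count per PPM are all assumptions for illustration, not agreed choices.

  // Sketch of booking histogram sets (1)-(7) with ROOT (assumed).
  // Names, binning and the channel count are placeholders.
  #include <TH1F.h>
  #include <TH2F.h>
  #include <string>
  #include <vector>

  const int nChannels = 64;            // assumed channels per PPM

  std::vector<TH1F*> lutPerChannel;    // (1) LUT output per channel
  std::vector<TH1F*> fadcPerChannel;   // (5) FADC samples per channel
  std::vector<TH2F*> fadcVsLut;        // (6) FADC peak vs LUT output
  TH1F* lutLAr  = 0;                   // (2) sum over LAr channels
  TH1F* lutTile = 0;                   // (3) sum over Tile channels
  TH1F* lutAll  = 0;                   // (4) sum of (2) and (3)

  void bookHistograms() {
    for (int ch = 0; ch < nChannels; ++ch) {
      std::string id = std::to_string(ch);
      lutPerChannel.push_back(new TH1F(("lut_" + id).c_str(),
          ("LUT output, channel " + id).c_str(), 256, 0., 256.));
      // One bin per readout slice, filled with the ADC value as weight
      // and reset every event to follow the pulse shape.
      fadcPerChannel.push_back(new TH1F(("fadc_" + id).c_str(),
          ("FADC samples, channel " + id).c_str(), 5, -0.5, 4.5));
      fadcVsLut.push_back(new TH2F(("fadcVsLut_" + id).c_str(),
          ("FADC peak vs LUT output, channel " + id).c_str(),
          256, 0., 1024., 256, 0., 256.));
    }
    lutLAr  = new TH1F("lut_lar",  "LUT output, LAr channels",  256, 0., 256.);
    lutTile = new TH1F("lut_tile", "LUT output, Tile channels", 256, 0., 256.);
    lutAll  = new TH1F("lut_all",  "LUT output, all channels",  256, 0., 256.);
  }

Filling the LAr and Tile sums needs the mapping to the calorimeters, so in the program structure discussed below it would naturally happen after the mapping step.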
If the PPM readout is not available, the LUT outputs can also be seen directly via the CPMs. The CPM can also be used to read half the PPM FADC channels by setting BCMUX to pass-through mode and disabling the peak finder.

Architecture

Although we need monitoring in ATHENA for comparison with the calorimeters and the CTP, it is clear that we could do some useful monitoring in a standalone framework, which we could get going very quickly. Adrian and Steve are keen to start immediately.

Adrian reminded us of the architecture he had proposed at the software meeting in Stockholm. Briefly, this suggests a 1:1 decoding of ROD fragments into simple objects, followed by a mapping service creating another set of objects, with later stages including making sums and running algorithms.

We had some discussion about the nature of the simple objects. The outcome is summarised below. For the moment the mapping should be done in the analysis. When the ATHENA-based byte stream decoding and monitoring gets under way we may try for some convergence on higher-level, more complex objects.
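
As an illustration of the mapping step, the sketch below translates hardware coordinates (crate, module, channel) into calorimeter (eta,phi). The class and member names (MappingService, HardwareAddress, CaloCoordinate) are hypothetical, not existing L1Calo interfaces, and the table itself would have to be filled from the cabling information.

  // Hypothetical sketch of a mapping service from hardware addresses to
  // calorimeter coordinates; names and interfaces are illustrative only.
  #include <map>

  struct HardwareAddress {
    int crate;
    int module;
    int channel;
    bool operator<(const HardwareAddress& rhs) const {
      if (crate  != rhs.crate)  return crate  < rhs.crate;
      if (module != rhs.module) return module < rhs.module;
      return channel < rhs.channel;
    }
  };

  struct CaloCoordinate {
    double eta;
    double phi;
  };

  class MappingService {
  public:
    // Returns false for disconnected or unknown channels.
    bool lookup(const HardwareAddress& addr, CaloCoordinate& coord) const {
      std::map<HardwareAddress, CaloCoordinate>::const_iterator it = m_map.find(addr);
      if (it == m_map.end()) return false;
      coord = it->second;
      return true;
    }
  private:
    std::map<HardwareAddress, CaloCoordinate> m_map;  // filled from cabling data
  };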

Raw Data Objects

The requirements for histograms reinforced various earlier discussions at Stockholm and before about the nature of the simple decoded objects (Raw Data Objects or RDOs) that we need to make.

  • One object contains a single tower or jet element, etc.
  • This object contains all the time slices for that quantity.
  • It also contains the error and status bits related to the tower or jet element value (eg parity error, BCID flags, etc).
  • The address will be the crate, module and (eta,phi) within the module (mapping to calorimeters is not done at this stage).
  • There will be a separate container for all the objects of one particular type (eg FADC sample, LUT output, CPM tower, etc).
  • There will be separate types of object for hits or energies from whole modules and for various CMM quantities and for error flags relating to a whole module.
  • There should be a common base class, probably with attributes of crate and module address and data value. The separate data types in the system (CPM tower data, CPM hits, JEM inputs, JEM hits, JEM energy sums, PPM FADC samples, PPM LUT outputs, various CMM inputs, sums and CTP outputs) will generally be distinct subclasses (see the sketch below).
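
A minimal sketch of such a hierarchy, assuming only the attributes listed above, is given below. The class and member names (L1CaloRdo, PpmTowerRdo) are illustrative and not an agreed design; only the general shape (common base class, one subclass and one container per data type) reflects the discussion.

  // Illustrative sketch of the proposed RDO hierarchy; names are placeholders.
  #include <vector>

  class L1CaloRdo {                       // common base class
  public:
    L1CaloRdo(int crate, int module) : m_crate(crate), m_module(module) {}
    virtual ~L1CaloRdo() {}
    int crate()  const { return m_crate; }
    int module() const { return m_module; }
  private:
    int m_crate;
    int m_module;
  };

  // One object per trigger tower: all time slices plus error/status bits,
  // addressed by (eta,phi) within the module (no calorimeter mapping yet).
  class PpmTowerRdo : public L1CaloRdo {
  public:
    PpmTowerRdo(int crate, int module, int eta, int phi)
      : L1CaloRdo(crate, module), m_eta(eta), m_phi(phi), m_errorBits(0) {}
    void addFadcSample(int adc)  { m_fadcSamples.push_back(adc); }
    void addLutSlice(int energy) { m_lutSlices.push_back(energy); }
    void setErrorBits(unsigned int bits) { m_errorBits = bits; }  // parity, BCID flags
    const std::vector<int>& fadcSamples() const { return m_fadcSamples; }
    const std::vector<int>& lutSlices()   const { return m_lutSlices; }
  private:
    int m_eta, m_phi;                     // location within the module
    std::vector<int> m_fadcSamples;       // all time slices read out
    std::vector<int> m_lutSlices;
    unsigned int m_errorBits;
  };

  // A separate container for each RDO type, eg:
  typedef std::vector<PpmTowerRdo*> PpmTowerRdoContainer;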

Monitoring Program Structure

Steve outlined a pseudocode structure of the steps in a monitoring program - to be implemented in Adrian's monitoring framework.

  event loop {
    getEvent()
    splitEvent()  // into list of RodFragments
    decodeFragments() {
      create RdoContainer
      RdoContainer.digest(RodFragment)
      check fragment header
      decide which decoder
      Decoder::decodeCpmData(RodFragment,
                             vector<CpmTowerRdo*>,
                             vector<CpmHitsRdo*>)
    }
    doMapping()
    analyse()    // ie fill histograms
  }
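
One way the decodeFragments() step might be fleshed out is sketched below. Only the Decoder::decodeCpmData call is taken from the pseudocode above; the RodFragment helpers (headerOk, isCpmFragment) are hypothetical stand-ins for whatever the fragment interface really provides.

  // Possible shape of decodeFragments(); helper names are assumptions.
  #include <vector>

  class RodFragment;                       // from the fragment interface
  class CpmTowerRdo;
  class CpmHitsRdo;

  namespace Decoder {
    void decodeCpmData(const RodFragment&,
                       std::vector<CpmTowerRdo*>&,
                       std::vector<CpmHitsRdo*>&);
  }

  // Hypothetical helpers: validate the ROD header and identify the source.
  bool headerOk(const RodFragment& frag);
  bool isCpmFragment(const RodFragment& frag);

  void decodeFragments(const std::vector<RodFragment*>& fragments,
                       std::vector<CpmTowerRdo*>& towers,
                       std::vector<CpmHitsRdo*>& hits) {
    for (std::vector<RodFragment*>::const_iterator it = fragments.begin();
         it != fragments.end(); ++it) {
      const RodFragment& frag = **it;
      if (!headerOk(frag)) continue;       // check fragment header
      if (isCpmFragment(frag)) {           // decide which decoder
        Decoder::decodeCpmData(frag, towers, hits);
      }
      // PPM, JEM and CMM fragments would dispatch to their own decoders here.
    }
  }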

Packages and People

The getEvent and splitEvent methods above are already available via our fragment interface and Bruce's implementation of it (eventMonitoring and fragmentSource packages).

Steve will develop the byte stream decoding in a new bytestreamDecoder package. He hopes to have some initial offering by Friday!

Adrian will develop the monitoring program, doing histogramming with outputs from the bytestreamDecoder, in another package called protoAnalysis, which Steve has already created for the purpose.


Last updated on 22-Jul-2004 by Murrough Landon