ATLAS Level 1 Calorimeter Trigger Software

L1Calo Software Minutes: 30 January 2003

 

Present: Bruce, Cano, Christian, Gilles, Murrough, Norman, Steve, Thomas.

A lot of software progress was made in the week before this meeting during the Mainz/Stockholm visit to RAL. The details were reported via email and were not all discussed in detail during the meeting.

JEM software status: Cano

Mainz are still waiting for the return of their modules after the RAL visit and Cano has been involved in teaching so there has not been any change since the RAL visit (where many aspects of the JEM module services and new test vector readers were tested, developed and debugged).

The next step is to perform a delay scan of the inputs, varying the TTCrx deskew1 clock with constant LVDS inputs from the DSS. Gilles has code to do a similar standalone scan of the CpChip timing. This is available as the CpmKicker program (run via the run control) in the new cpmTests package.

It was felt it would be better to try to move this further towards the "proper" style before proposing it as a model for the JEM. It presently relies on a private modification of the TTCvi software and would also only work in a single crate setup (which would be OK for the Mainz bench test, but not later at RAL). However the JEM timescale is fairly short, eg the end of next week.

Cano will also work on cleaning up the present test vector code and implement automatic checks on the outputs.

CPM software status: Gilles

The CPM tests are now all running via the run control and some long scans have been done.

The next steps are to test more LVDS inputs using a DSS with two LVDS daughtercards. Scans with this also need the sequencer in the run control and ideally a nicer way to select different tests without editing the database. Tests of the Glink output require the DSS software for the Glink sink daughtercard (which exists in principle but is untested).

DSS: Bruce

Bruce has rewritten some of the DSS module services to allow two daughter cards of the same type to be handled properly. This was more difficult in his previous (more elegant) design. The previous versions of both moduleServices and dssServices packages have been tagged and new code will be committed soon once it has been tested.

The TTCvi services have been updated to use new developments in the database (see below) to set up broadcast commands that configure the TTCrx chips on other modules. Completion of this work awaited the database code, which is now reported to work.

When the run control sequencer is available for use, the ROD test kicker will be migrated to use it.

Database and run control: Murrough

Murrough summarised a number of recent changes to the database. These include support for the TTCvi to configure the TTCrx chips on modules without VME/I2C access to the TTCrx. After a few problems were fixed this should now work. The database can now define the TTC address of each module. Also the firmware configuration of modules and descriptions of FPGA programs can now be specified.

The sequencer program now reports its current step in an IS variable. It can be configured via the IGUI, but at the moment it still has to be started by hand. More work is needed in the run control area to start it automatically for certain run types.

A few problems with the run control were reported. There should be more output in the log files when an exception occurs. It also seems that IS is not always updated by the IGUI (unless one switches between L1Calo panels) and MRS error messages from our run controllers don't always appear. The reasons are not clear and must be investigated.

CMM software status: Norman

Norman has recently started again on the CMM software, having received the updated programming model and the module itself from Ian. He has updated the parts file and will then move on to completing the module services and simulation.

Simulation: Steve

Steve has mostly been working on improving the code for CPM tests, eg adding more support for the CP scan path.

He has had some feedback from Sam concerning the work on the JEM simulation. The students are now compiling their code, but it's not yet finished.

Also, Paul Hanke would like a shift register implemented in the simulation for use in the PPM simulation, but Steve needs more details of Paul's requirements first - perhaps by phone.

Mainz/Stockholm visit to RAL: follow up

Norman will put up on the web the detailed test plan developed for the JEM tests and the next steps. The next visit is still planned for the end of February - just before the Mainz joint meeting.

Once again we discussed the desirability of a detailed test plan for the subsequent subsystem and full slice tests. This requires input from a wide range of people and should perhaps be on the agenda of the Mainz meeting.

Norman will look into providing accounts on the RAL bastion server to allow Cano and others to access the RAL setup from their home labs.

Monitoring via the ROS

We were recently asked if we really needed to have monitoring of events available in the ROS. Bruce, Murrough and Norman sent a collective response basically saying we did need it (or some very similar supported solution for collecting events from our existing Slink hardware) at least for this year.

At this point, the video conference with Mainz ended.

Timing calibrations

We tried to list the detailed steps required in performing the timing scans we need. We know these details for the CPM and assumed the JEM is similar.

As part of the general discussion it was felt that the existing state model of run control implemented by the Online group is not very conducive to running complex series of operations where you need to do a number of distributed operations in a particular order. But pending any radical rethink by the Online group, eg towards a more command oriented model, we tried to see if we could shoehorn what we need to do into the existing infrastructure.

For the CPM there are two scans: (a) the capture of LVDS inputs by the serialisers using the deskew1 clock and (b) the capture of multiplexed 160 Mbit/s data by the CpChip using the deskew2 clock (both on board and via the backplane). For the JEM these are similar, but the main FPGA only uses the deskew2 clock for backplane data.

The steps for (a) are as follows:

  1. Before the run starts, load a calibration pattern into the DSS of alternating 0x5555 and 0xaaaa. [RC: load()?]
  2. For each step, set deskew1 clock phase (in 104ps intervals) (presently must be done via TTCvi, later by each CPM itself) [RC: pause()/load()?]
  3. Reset DLLs on the CPM [RC: pause()/load()?]
  4. Tell CPM serialisers to calibrate [RC: pause()/load()?]
  5. Read the serialiser spy memories to see if the calibration pattern is being read correctly or has errors ["kicker"]

The steps for (b) are as follows:

  1. Before the run starts, load a calibration pattern into the CPM serialisers. [RC: load()?]
  2. For each step, set deskew2 clock phase (in 104ps intervals) (presently must be done via TTCvi, later by each CPM itself) [RC: pause()/load()?]
  3. Reset CpChip DLLs on the CPM [RC: pause()/load()?]
  4. Tell CpChips to calibrate [RC: pause()/load()?]
  5. Reset CpChip scanpath [RC: pause()/load()?]
  6. Read the CpChip scanpath to see if the calibration succeeded or has errors ["kicker"]

All but the last step in each sequence should be done by the run controller at a suitable state transition. The only trouble is the startup: either we load the DSS/serialiser memories at the load transition (instead of configure, which we would normally use) or else the timing setup and resets must be done at the configure transition (instead of load).
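
The per-step structure of scan (a) could be sketched roughly as below. All of the function names here are hypothetical stand-ins for the real CPM/DSS module services (they do not exist in the L1Calo packages); the sketch only illustrates the ordering of the steps and where the run control transitions would sit.

```cpp
#include <string>
#include <vector>

// Hypothetical stand-ins for the real module services; the ScanLog just
// records which operations were invoked, in order.
struct ScanLog { std::vector<std::string> calls; };

void loadDssCalibrationPattern(ScanLog& log) {
    // Step 1: alternating 0x5555/0xaaaa words loaded into the DSS.
    log.calls.push_back("loadPattern");
}
void setDeskew1Phase(ScanLog& log, int ps) { log.calls.push_back("setPhase"); }
void resetCpmDlls(ScanLog& log)            { log.calls.push_back("resetDlls"); }
void calibrateSerialisers(ScanLog& log)    { log.calls.push_back("calibrate"); }
int  readSpyMemoryErrors(ScanLog& log)     { log.calls.push_back("readSpy"); return 0; }

// One pass over the deskew1 phase range; returns the total error count
// accumulated by the "kicker" step.
int runDeskew1Scan(ScanLog& log, int nSteps) {
    const int stepPs = 104;                    // deskew1 granularity
    int errors = 0;
    loadDssCalibrationPattern(log);            // [RC: load()]
    for (int i = 0; i < nSteps; ++i) {
        setDeskew1Phase(log, i * stepPs);      // [RC: pause()/load()]
        resetCpmDlls(log);
        calibrateSerialisers(log);
        errors += readSpyMemoryErrors(log);    // ["kicker"]
    }
    return errors;
}
```

Scan (b) would follow the same shape, with the serialisers taking the role of the pattern source and the CpChip scanpath taking the role of the spy memories.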

Module Services API

The use of run control sequences for making timing calibrations requires the use of pause/resume transitions, which are not yet implemented in the DaqModule and associated APIs. These are pure virtual interfaces, all of whose methods have to be implemented by subclasses, so changing the API will break a lot of code. We should try to do this only once, so we also discussed other changes we might make at the same time.

Mainly this is to add a method to return/fill a block of status information. The run controller can then copy this into IS. Although we hoped to avoid direct dependence on IS code in module services, we eventually realised we already had it via the database objects. The provisional suggestion is for each DaqModule subclass to have its own private status block as a data member (subclass of L1CaloModuleStatus). This would be returned by base class pointer:

class DaqModuleSubclass: public DaqModule {
public:
  // Fill m_status and return const pointer to it.
  const L1CaloModuleStatus* updateStatus();
private:
  L1CaloXXXStatus m_status;
};
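
A minimal compilable sketch of this pattern is given below. The subclass names (L1CaloCpmStatus, CpmModule) and the status field are invented for illustration and stand in for whatever the real module services define; only DaqModule and L1CaloModuleStatus come from the discussion above.

```cpp
#include <string>

// Base class for per-module status blocks (sketch).
struct L1CaloModuleStatus {
    virtual ~L1CaloModuleStatus() = default;
    virtual std::string moduleType() const = 0;
};

// Hypothetical CPM-specific status block.
struct L1CaloCpmStatus : L1CaloModuleStatus {
    int dllLockErrors = 0;
    std::string moduleType() const override { return "CPM"; }
};

// Pure virtual module interface (sketch of the DaqModule API).
struct DaqModule {
    virtual ~DaqModule() = default;
    // Fill the module's private status block and return a const pointer to it.
    virtual const L1CaloModuleStatus* updateStatus() = 0;
};

// Hypothetical concrete module: owns its status block as a private data
// member and hands it back by base class pointer, as suggested above.
struct CpmModule : DaqModule {
    const L1CaloModuleStatus* updateStatus() override {
        m_status.dllLockErrors = 0;   // the real code would read hardware here
        return &m_status;
    }
private:
    L1CaloCpmStatus m_status;
};
```

The run controller would then copy the returned block into IS without needing to know the concrete status type.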

Run control sequencer, run types and selection

For "kicker" type programs to follow the sequence of steps in the scan they need to subscribe for changes in the IS variable updated by the sequencer. Murrough should provide some example code.
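
The real kicker would subscribe through the Online IS API, which is not reproduced here; the sketch below only illustrates the subscribe/notify pattern involved, with an invented StepVariable class standing in for the IS variable.

```cpp
#include <functional>
#include <utility>
#include <vector>

// Generic stand-in for an IS variable carrying the sequencer's current
// step: kickers register callbacks, the sequencer publishes updates.
// This is NOT the TDAQ IS API, only an illustration of the pattern.
class StepVariable {
public:
    void subscribe(std::function<void(int)> callback) {
        m_subscribers.push_back(std::move(callback));
    }
    void publish(int step) {                     // sequencer side
        for (auto& cb : m_subscribers) cb(step);
    }
private:
    std::vector<std::function<void(int)>> m_subscribers;
};
```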

A larger question is how to select and control the various types of timing and other scans in a general and simple way.

Putting all the settings into the IGUI could be difficult and confusing (many settings may need to be changed together). The likely alternative is to define in the database named groups of settings appropriate to one type of run and then allow the user to select the group by name from the IGUI. Any changes to the settings (eg even to change number of steps and the increment) would mean editing the database and perhaps creating a new named group.
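
By way of illustration only (the group names and fields below are invented, not the real database schema), a set of named groups of scan settings might look like:

```cpp
#include <map>
#include <string>

// Illustrative sketch of named groups of scan settings selectable by name
// from the IGUI; the group names and fields are assumptions.
struct ScanSettings {
    int nSteps;       // number of scan steps
    int incrementPs;  // clock phase increment per step, in picoseconds
};

std::map<std::string, ScanSettings> makeScanGroups() {
    return {
        {"deskew1-coarse", {20, 520}},   // quick survey of the phase range
        {"deskew1-fine",   {240, 104}},  // full scan at 104 ps granularity
    };
}
```

Changing the number of steps or the increment would then mean editing the database entry for a group (or adding a new group), exactly as described above.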

Murrough will try to implement something along these lines (when not preparing for or attending the database workshop!).

Database workshop

Murrough is giving a talk on our database requirements at the forthcoming database workshop. This seems mostly focussed on the conditions database, but we can try to raise some issues we have with the configuration database.

Murrough's draft talk draws on a web page produced by Thorsten after discussion with Murrough and Ralf. It mentions our expected data volumes, the kinds of things we want in the configuration database and the storing of firmware binaries in a similar style to the conditions database, as well as the need for a local implementation for test setups.

We should also ask how the conditions DB treats the conditions for sequences of runs or runs with many steps.

Next meeting(s)

To be arranged. Maybe just during the next Mainz/Stockholm visit to RAL and then at the Mainz joint meeting.

Actions

Actions from this and previous meetings:
  • Steve: send Cano test vector reader examples and advice, also describe how the "Bill files" are used in CPROD tests
  • Bruce/Murrough: write and circulate proposal for handling multistep runs in DaqInterface and run controller
  • Bruce: complete Module Service document and submodule syntax description
  • Bruce/Murrough: define required HDMC improvements
  • Murrough: tackle project management issues: eg schedule (Gantt chart, stages of evolutionary delivery), risks, etc
  • Murrough: write database user guide
  • Murrough: complete run control design document


Last updated on 06-Feb-2003 by Murrough Landon