ATLAS Level 1 Calorimeter Trigger Software
L1Calo Software - Minutes: 10 March 2004
Software meeting at Heidelberg on 10 March 2004

Migration to the new Online/Dataflow release

The move to Online release 00-21-00 was implemented recently, with many changes to our software in the areas of the database, run control states and IGUI panels. Several newly introduced errors have been fixed, but some still remain. For example, Gilles (and also Cano) have found problems with the multistep run sequencer; Murrough expects this is a bug in the new database code. Although our run controllers implement the new run control states, these are not yet propagated to module services. We agreed to do this on Tuesday (16 March), replacing the old "start" transition with "prepareRun" and "startTrigger", and "stop" with "stopTrigger" and "stopFE". (Afterthought: we should also add the checkpoint transition.)

The other outstanding area is migration to ROD crate DAQ style run controllers. We should find out if this is absolutely necessary for the test beam. It may be needed in the TTC crate to include the new CERN modules (LTP, BUSY). Since our migration, an updated online release (00-21-01) has been announced, together with the new dataflow release (DF-00-07-00). These new releases should involve only minor changes for us (except for ROS related issues).

We also discussed the new run controller URD, on which our feedback is requested. We had already proposed two use cases, which are included in the document, but the suggestion that more global states may be required was not taken up. We should give feedback that hidden states are required (ie "shall", not "may", be provided). We also need to firm up our use cases, especially concerning calibration with the calorimeters.

VME and HDMC issues

Florian (with Klaus Schmitt) has been looking at adapting the CERN VME driver for the homebrew CPU. This would mean that all higher software layers could work without changes. They have identified some places where the source code needs changing and will contact Markus Joos in case of further problems.
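The transition split agreed above for module services could be sketched as follows. The ModuleService class and its method bodies are invented for illustration; only the transition names (prepareRun, startTrigger, stopTrigger, stopFE) come from these minutes.

```cpp
#include <string>
#include <vector>

// Hypothetical module service exposing the new, finer-grained run control
// transitions. Each transition just records its name here so the calling
// sequence can be inspected.
class ModuleService {
public:
    virtual ~ModuleService() {}
    // Replacements for the old "start" transition
    virtual void prepareRun()   { record("prepareRun"); }
    virtual void startTrigger() { record("startTrigger"); }
    // Replacements for the old "stop" transition
    virtual void stopTrigger()  { record("stopTrigger"); }
    virtual void stopFE()       { record("stopFE"); }

    const std::vector<std::string>& history() const { return m_history; }

private:
    void record(const std::string& transition) { m_history.push_back(transition); }
    std::vector<std::string> m_history;
};

// During migration, a legacy run controller could forward the old coarse
// transitions onto the new pairs, so modules always see the new sequence.
void legacyStart(ModuleService& service) {
    service.prepareRun();
    service.startTrigger();
}

void legacyStop(ModuleService& service) {
    service.stopTrigger();
    service.stopFE();
}
```

With this forwarding in place, an old-style start/stop cycle produces the four new transitions in order.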
It is clear, though, that not too much time should be spent pursuing this driver adaptation.

At the last meeting we discussed handling A32 addressing in HDMC. The preferred option is to have an A32 bus class and use either all A32 or all A24 addressing in a crate. The only problem may be handling VME64x configuration space, which uses A24 cycles to set up A32 addresses. We also need to check whether the TCM with the PP crate adapter link card can handle A32 addressing (and also whether the new LTP and BUSY modules can handle A24 addressing in a TTC crate with an A24 TTCvi). However, it emerged that initially the PPM will only use A24. This will be OK until there are more than four PPMs, so we deferred the issue again.

Murrough has looked at optimising the performance of HDMC when it reads in parts files for many complex modules in one crate. However, the present PartManager architecture requires sorting all parts - an operation which scales poorly as the number of parts grows. Fixing this would require major surgery to the PartManager and Part classes, so we will have to live with it until after the test beam.

Rationalising "kicker" programs

We have a number of so-called "kicker" programs, ie standalone applications invoked by the run control to interact with modules via VME during a run, eg to read or reset spy memories, or to collect ROD fragments from DSSes. Bruce recently made a couple of modifications to the CPROD test kicker program. Firstly, he separated the interaction with the database and the signal handling from the looping/analysis code. The kicker database handling could go into a separate package that could be used by other kicker programs. It was suggested that common handling of run sequences could also go into such a package. The signal handling has been moved into infraL1Calo - however Bruce is not sure that it works properly (or ever did). This needs further investigation, eg whether there is a conflict with any signal handling declared by the online software.
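One way to avoid the suspected conflict is for the kicker to save whatever handler was installed before it (eg by the online software) and chain to it, rather than silently replacing it. The sketch below illustrates the idea for SIGINT; all names are invented, and this is not the actual infraL1Calo code.

```cpp
#include <signal.h>

namespace {
    struct sigaction g_previousAction;     // whatever was installed before us
    volatile sig_atomic_t g_interrupted = 0;

    // Our handler notes the signal, then chains to any pre-existing handler
    // instead of discarding it.
    void kickerHandler(int sig) {
        g_interrupted = 1;
        if (g_previousAction.sa_handler != SIG_DFL &&
            g_previousAction.sa_handler != SIG_IGN &&
            g_previousAction.sa_handler != 0) {
            g_previousAction.sa_handler(sig);
        }
    }
}

// Install our SIGINT handler, remembering the previous action for chaining.
void installKickerHandler() {
    struct sigaction action;
    sigemptyset(&action.sa_mask);
    action.sa_flags = 0;
    action.sa_handler = kickerHandler;
    sigaction(SIGINT, &action, &g_previousAction);
}
```

The same pattern would apply to whichever signals the kicker actually traps; checking what the online software installs first is still needed to confirm there is no conflict in the other direction.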
Secondly, Bruce implemented a factory class to determine which kicker subclass to use according to run type. However, it may be more flexible to let the run type objects themselves specify the kicker program to run (especially if more than one kicker is needed in one crate). Bruce also remarked that he has identified the limitation in the DSS which restricts us to bursts of 16 events, and has asked for the FIFO in question to be increased in size.

ROS issues

We have successfully tried the ROS with dataflow versions 5 and 6, with both the old Slink interfaces and the new HOLA/FILAR interfaces. However, up to now the ROS has not provided events to monitoring (at least not in data driven mode). This should now be available in the recently announced DF-00-07-00 release. The new release uses the latest event format; so far only the CPM and neutral ROD firmware support this, and the other variants need updating. We can now try again to use the ROS with the new software and get event fragments via the online monitoring interface. Bruce proposes to use the monitoring interface for access to either ROS fragments or DSS fragments.

Database issues

The PPM needs a lot of configuration/calibration data. Florian has started developing an interface to load this from a hierarchical description in XML. Such an XML file would need to be prepared by collecting data from several sources (eg different calibrations). Further discussion of how this could fit into the existing L1Calo database is needed.

We have done nothing with the conditions database so far. We certainly suffer from the lack of a general archive and restore facility for the configurations used in previous tests. In the short term we could use scripts to help, but we ought to re-request support for archiving and restoration of configuration databases.

AOB

We recently changed how we use TTC commands, in order to fit in with DSS limitations and to provide for a global start (or reset) playback command.
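The run-type-driven selection suggested above for the kicker programs could be sketched as a registry mapping a name supplied by the run type object to a kicker creator, so one run type can request several kickers. All class and run-type names here are invented; this is not the factory Bruce implemented.

```cpp
#include <map>
#include <string>

// Minimal kicker interface; real kickers would read spy memories, collect
// ROD fragments, etc.
class Kicker {
public:
    virtual ~Kicker() {}
    virtual std::string name() const = 0;
};

class SpyMemoryKicker : public Kicker {
public:
    std::string name() const { return "SpyMemoryKicker"; }
};

class RodFragmentKicker : public Kicker {
public:
    std::string name() const { return "RodFragmentKicker"; }
};

typedef Kicker* (*KickerCreator)();

template <class T> Kicker* createKicker() { return new T; }

// Registry mapping the name given by a run type object to a creator.
std::map<std::string, KickerCreator>& kickerRegistry() {
    static std::map<std::string, KickerCreator> registry;
    if (registry.empty()) {
        registry["SpyMemory"]   = &createKicker<SpyMemoryKicker>;
        registry["RodFragment"] = &createKicker<RodFragmentKicker>;
    }
    return registry;
}

// Create the kicker named by a run type; returns 0 if the name is unknown.
Kicker* makeKickerFor(const std::string& runTypeKicker) {
    std::map<std::string, KickerCreator>::const_iterator it =
        kickerRegistry().find(runTypeKicker);
    return (it == kickerRegistry().end()) ? 0 : it->second();
}
```

Because the run type carries the kicker name(s), adding a new kicker only means registering one more creator, with no change to a central factory switch.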
The TTC/Busy document needs updating. Murrough has started this, but needs to update the list of other TTC commands we really need (if any). The ROD may be a special case, as TTC commands are needed for synchronous sampling of ROD fragments across the system.

Next meeting(s)

To be arranged.

Last updated on 15-Mar-2004 by Murrough Landon