Database Workshop on 22-23 May 2006
Present: Bruce, Damien, Florian, Kambiz, Murrough,
Norman, Richard, Steve, Victor.
Overview
The workshop ran for a day and a half. We started with some
presentations to raise discussion topics. These are in the CERN
agenda system.
After our internal discussions we had a further session with Richard Hawkings,
followed by a phone conversation with Rainer, who was unable to be present in
person.
What follows are some very abbreviated notes, mainly from the first
morning. The next section has a summary of our workplan as transcribed
from the final state of the whiteboard at the end of the meeting.
Norman
Norman reminded us of the DB context diagram (in SASD notation) developed
by him and Richard. There were a few comments:
- The "current settings" should also contain operation parameters,
chosen by humans, in addition to those from calibration procedures.
- As well as "connectivity", there is also the hardware configuration
(ie crates and modules).
- The diagram should show logging of used settings to the conditions
database, ie an audit trail.
- Another area is that of "run plans" and "run types". NB a run plan
may involve extraction of new settings, like the precalib settings
used for the DAC scans of the calorimeter cable tests.
Murrough
Comments on Murrough's talk included:
- We need to ensure we give our input into discussions of connectivity, eg the
one starting in the DAQ group, and also state our requirements of the TC
installation DBs.
- We should find out more about cabling databases or mappings used in
offline code by various detectors.
- The connectivity should include the eta and phi mapping for use, most
immediately, in cable test software.
Florian
Florian gave a summary of how his PpmCal structure for PPM calibration and
other settings is organised. He also described the work he has been doing on
implementing "run plans" in C++ libraries rather than the existing shell
scripts. He answered lots of questions, but I failed to take notes.
Demonstrations
Richard gave us a demonstration of using COOL_IO to load and browse
data in COOL tables. His talk gives the instructions for doing this
at home! He has still had no success with KTIDBExplorer.
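As a flavour of the python scripting interface to COOL (mentioned again in the
discussion with Richard Hawkings below), here is a minimal browsing sketch
along the same lines. The SQLite file name and the folder path are placeholders,
not our actual layout:

    from PyCool import cool

    # Open an existing COOL database read-only. The connection string
    # (a local SQLite file here) and the folder path are placeholders.
    dbSvc = cool.DatabaseSvcFactory.databaseService()
    db = dbSvc.openDatabase('sqlite://;schema=l1calo.db;dbname=L1CALO', True)
    folder = db.getFolder('/L1Calo/PpmCal')

    # List every stored object: its channel, interval of validity and
    # payload, over all channels and the full validity range.
    objects = folder.browseObjects(cool.ValidityKeyMin, cool.ValidityKeyMax,
                                   cool.ChannelSelection.all())
    for obj in objects:
        print('channel %d: [%d, %d) %s'
              % (obj.channelId(), obj.since(), obj.until(), str(obj.payload())))
    db.closeDatabase()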
Norman also demonstrated his connectivity database. A slide from his
earlier talk showed the normalised database schema he is using.
At present his work covers the analogue cables only, while what we have
added to the OKS database covers only the digital cables. Subsequently
Norman also suggested a way in which we could have our own tables
but use whatever was in the TC cabling database if it was present
and up to date.
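As an illustration of the normalised approach (not Norman's actual schema:
every table and column name below is invented), a toy python/sqlite3 sketch
where modules and cables live in separate tables and a cable joins two
(module, connector) endpoints:

    import sqlite3

    # Toy normalised connectivity schema. Modules and cables are separate
    # tables; a cable references its two endpoints by module id.
    con = sqlite3.connect(':memory:')
    con.executescript('''
        CREATE TABLE module (
            module_id INTEGER PRIMARY KEY,
            crate     TEXT    NOT NULL,
            slot      INTEGER NOT NULL,
            type      TEXT    NOT NULL    -- eg PPM, CPM, JEM
        );
        CREATE TABLE cable (
            cable_id       INTEGER PRIMARY KEY,
            from_module    INTEGER NOT NULL REFERENCES module(module_id),
            from_connector INTEGER NOT NULL,
            to_module      INTEGER NOT NULL REFERENCES module(module_id),
            to_connector   INTEGER NOT NULL
        );
    ''')
    con.execute("INSERT INTO module VALUES (1, 'crate0', 5, 'PPM')")
    con.execute("INSERT INTO module VALUES (2, 'crate0', 9, 'CPM')")
    con.execute("INSERT INTO cable VALUES (100, 1, 0, 2, 3)")

    # "What is cabled to what" then falls out of a join.
    query = '''SELECT m1.crate, m1.slot, c.from_connector,
                      m2.crate, m2.slot, c.to_connector
               FROM cable c
               JOIN module m1 ON m1.module_id = c.from_module
               JOIN module m2 ON m2.module_id = c.to_module'''
    for row in con.execute(query):
        print(row)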
Workplan
Area | Solution | Work to be done | Consult offline | When |
Trigger menu | New DB from Johannes | Code to fill our objects | y | Autumn |
Calibration settings | COOL (link to justifications in ROOT/CORAL) | Complete prototype: load modules from COOL. Tools to fill/browse | y | Now |
Operational settings | COOL | Ditto | y | July |
Hardware configuration | OKS cache, load from COOL | Confirm tdaq-01-06-00 OKS archive/restore works | n | Not urgent |
Connectivity | | Talk to Kathy Pommès et al | y | July |
Logging/Conditions DB | oks2cool | Design, choose info to record. Unique values (not IS/OKS mix). Implies IGUI work | y | Dec? |
Run plans, run types | | Design and think | y | |
Calibration results | COOL link to ROOT/POOL(?) | | | |
Calib non-event data | | | | |
Module history | Pavel's DB/MTF | Review/design -> import to CERN | n | By module manufacture |
Installation DB | MTF | Get CTP group experience | n | |
Discussion with Richard Hawkings
We had a number of questions for Richard:
- Q: Do we have read/write access to the Kathy Pommès DBs and do they have IOVs?
A: Can probably have read access and may be able to negotiate write access.
But DB only has date of last change, not history.
History has been requested, but not provided yet (both cabling DB and rack wizard
are supported by CMS).
Most subdetectors have their own internal detailed cabling descriptions, so for
the moment probably best to do the same.
- Q: Can we use POOL outside of CERN to store links to eg ROOT files?
A: Yes, POOL tokens are generated locally (from timestamp + ethernet MAC address;
see the sketch after this list). However so far POOL has only been used in the
ATHENA context...
- Q: What are the rules on benchmarking new DB applications before moving them
to the production Oracle servers?
A: Prefer to move step by step from devdb10 to the integration server and finally
to the production server, eg testing with a year's worth of data on the
integration server. Two Oracle experts in ATLAS (Gansho and Florbella) can help
optimise schema and queries.
- Q: What is the plan for production server accounts?
A: We would apply (to Richard) for an account which owns the schema (and for
the moment we can edit our own schema). We also get a "writer" account, without
schema privileges, to fill existing tables. There would be a general reader
account, shared between schemas. Eg one for all COOL tables. The accounts are
the same on integration and production servers.
- Q: Are there common solutions already in use for some of our problems that we
don't know about?
A: Nothing major. For browsing we could also look at the web interface from
Torre (now running at CERN and easier to use than before). Also the python
scripting interface to COOL. Regarding the COOL+CORAL solution being used by
Beniamino, some (but not all) of the performance-related reasons for this
approach have been addressed by more recent versions of COOL.
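As an aside on the POOL answer above: python's version 1 UUIDs are built from
exactly the same ingredients (timestamp plus ethernet MAC address), so they
illustrate why such tokens can be generated locally without collisions. This is
only an analogy; POOL tokens are not UUIDs of this form.

    import uuid

    # uuid1() combines the current timestamp with this host's MAC
    # address, so ids minted on different machines cannot clash -
    # the same reasoning that lets POOL generate tokens away from CERN.
    for _ in range(3):
        print(uuid.uuid1())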
Discussion with Rainer
In a phone discussion with Rainer we reviewed the workplan table above.
Comments in the subsequent discussion included:
- Should the hardware configuration know the module IDs and also IDs
of daughtercards (which perhaps uniquely in our system are available
for all JEM daughtercards)? The feeling was no, that should be part
of the information read from the system at run start and logged in
the conditions database (see the sketch after this list).
- What tools will we have for looking at the conditions DB away from
the online system (and not in ATHENA context either)? Eg for tracking
trends, correlating with hardware changes (as in the previous point).
- Rainer expressed the hope that migrating to the use of COOL would be
"adiabatic". We certainly hope to avoid traumatic changes. As before, when we
move to new TDAQ/LCG releases we should try to provide a tarball for easy
installation at test rigs. In future we may also need test rig sites to run a
local MySQL server; an easy setup should be provided for this new requirement.
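Returning to the first point above, a minimal sketch of what logging module IDs
to the conditions database at run start might look like via the python interface
to COOL. The connection string (a local MySQL server, as in the last point), the
folder path, field names and validity key are all placeholders, not a design,
and the folder is assumed to exist already with this payload specification:

    from PyCool import cool

    dbSvc = cool.DatabaseSvcFactory.databaseService()
    # Open read-write (openDatabase defaults to read-only).
    db = dbSvc.openDatabase('mysql://localhost;schema=l1calo;dbname=L1CALO',
                            False)

    # Payload specification: one string per id read from the hardware.
    spec = cool.RecordSpecification()
    spec.extend('moduleId', cool.StorageType.String255)
    spec.extend('daughtercardId', cool.StorageType.String255)

    # Assume the folder was created earlier with this specification.
    folder = db.getFolder('/L1Calo/ModuleIds')

    # Store what was read at run start with an open-ended interval of
    # validity, one COOL channel per module slot (channel 0 here).
    data = cool.Record(spec)
    data['moduleId'] = 'JEM-042'        # invented example values
    data['daughtercardId'] = 'DC-0007'
    runStart = 1148288400000000000      # placeholder ValidityKey
    folder.storeObject(runStart, cool.ValidityKeyMax, data, 0)
    db.closeDatabase()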
Next meeting
The next phone meeting will be on Thursday 8 June at 11:00 CET
(10:00 BST).
Last updated on 26-May-2006
by Murrough Landon