I’m trying to deconstruct what EOS and CTA do, partly for testing our own file format and partly with an eye towards our eventual migration. To that end I’ve been doing the following:
Run prepare_tests to get the tapes inserted and labeled
Make placeholders in EOS with a modified version of eos-import-files
Insert the relevant rows in the CTA database for the files I’m writing
Write a fake tape with MHVTL
I finally got all of that to where I think it is right, but when I try to recall a file, the retrieve fails with “Synchronous workflow failed” and cta-taped logs an error as well.
“Synchronous workflow failed” suggests an error in queuing the file for retrieve rather than the retrieval operation itself. However, the taped error suggests that a tape was in fact mounted. Are this error and the log message from the same attempt? What messages do you see in the CTA Frontend log?
The other place to look to see why something failed is the failed requests (cta-admin failedrequests ls), which will show all requests that CTA has given up on (all retry attempts exhausted). cta-admin --json fr ls -l will give the error log messages associated with each request.
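As a sketch of how to inspect that output: the cta-admin lines are shown commented since they need a live CTA instance, and the sample field name requestId is illustrative only (the actual JSON schema depends on your cta-admin version). The pretty-printing step itself is standard:

```shell
# Requests CTA has given up on (all retry attempts exhausted) --
# these need a live CTA instance, so they are shown commented:
# cta-admin failedrequests ls
# cta-admin --json fr ls -l | python3 -m json.tool

# The pretty-printing step works on any JSON stream, e.g.:
echo '[{"requestId": "42"}]' | python3 -m json.tool
```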
The taped error does indeed suggest that the virtual tape does not have a valid header.
Thanks Michael. TL;DR I solved my problem. The “dd” command I was using was putting in a bunch of extra file marks, so no surprise CTA couldn’t read the header. Once I fixed that, all is working and I can recall a file from CTA that CTA never wrote.
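For anyone hitting the same thing: a no-rewind tape device (e.g. /dev/nst0) typically appends a file mark each time it is closed after writing, so driving each 80-byte label record through its own dd invocation scatters extra marks across the tape. Below is a minimal sketch of the single-write approach; the device path, the volume serial V01001, and the label contents are illustrative assumptions, not the exact AUL layout CTA expects:

```shell
# Build the 80-byte, space-padded VOL1 record in a regular file first
# (VOL1 label: bytes 1-4 "VOL1", bytes 5-10 volume serial):
printf '%-80s' 'VOL1V01001' > vol1.lbl

# Each close of /dev/nst0 after a write appends a file mark, so write
# each record in a single dd call and add marks only explicitly
# (commented out here -- these need the MHVTL drive):
# mt -f /dev/nst0 rewind
# dd if=vol1.lbl of=/dev/nst0 bs=80 count=1   # one record, one write
# mt -f /dev/nst0 weof 1                      # exactly one file mark
```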
To answer your question about what was there, though: I didn’t get the failed requests, as the authentication had expired by the time I noticed all the extra marks, so I restarted. But in the frontend logs I see this:
I’m glad you figured out how to create a dummy tape and can read it back in CTA.
Those log messages indicate a successful retrieve queueing request, though the filename of the file being recalled is strange: /eos/ctaeos/cta/etc/**group**, with asterisks in the name?
Also, the log messages do not correspond to the extended attributes on the file you listed above. That filename is /eos/ctaeos/cta/etc/group (without asterisks in the name), so it appears not to be the same disk file. The timestamp of the queueing request is also different: in the file metadata, the queue object was created at 23:07:51, which is earlier than the log messages at 23:19:19.
Anyway you solved your problem so I guess no need to dig any further.
I suspect the **group** is just leftover cruft from the colorizing grep I used. I’ll watch out for that in the future; I didn’t notice it in my terminal.