Two-copy archival reporting intermittently failing with "Failed to find archive file entry in the catalogue."

We are seeing a very slow rate of files going to tape, despite files being read from EOS and written to tape successfully. The issue seems to be due to the following error message:

{"epoch_time":1764190023.326574545,"local_time":"2025-11-26T20:47:03+0000","hostname":"getafix-ts20","program":"cta-taped","log_level":"ERROR","pid":833340,"tid":843106,"message":"In ArchiveMount::reportJobsBatchTransferred(): got an exception","drive_name":"obelix_lto9_22","instance":"antares","sched_backend":"cephUser","thread":"MainThread","tapeDrive":"obelix_lto9_22","mountId":"3282291","vo":"storaged-ceda","tapePool":"offsite_lto8","successfulBatchSize":45,"exceptionMessageValue":"filesWrittenToTape: Failed to find archive file entry in the catalogue: archiveFileId=4383011486, diskInstanceName=eosantaresfac, diskFileId=210594737"}
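For anyone triaging the same thing, the offending identifiers can be pulled out of the structured log with jq and grep. This is just a sketch: the sample line below is the error quoted above, trimmed to the relevant field; in practice you would grep the cta-taped log instead of inlining a line.

```shell
# Pull the (archiveFileId, diskInstanceName, diskFileId) triple out of the
# exceptionMessageValue field of a cta-taped error line.
line='{"exceptionMessageValue":"filesWrittenToTape: Failed to find archive file entry in the catalogue: archiveFileId=4383011486, diskInstanceName=eosantaresfac, diskFileId=210594737"}'
echo "$line" \
  | jq -r '.exceptionMessageValue' \
  | grep -oE 'archiveFileId=[0-9]+, diskInstanceName=[^,]+, diskFileId=[0-9]+'
# -> archiveFileId=4383011486, diskInstanceName=eosantaresfac, diskFileId=210594737
```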

We have confirmed files are being read from EOS successfully, and are being written to tape successfully, but the failure of the reporting step means the files are never registered in the catalogue.

We are seeing this only for our facilities VO that has two tape copies for their data. We are seeing this error reported for both the first and second tape copy sessions. Files with a single tape copy for other VOs are going to tape as expected, at the expected rate, without any indication of this error.

We’re dealing with a fairly large backlog of facilities data to ingest at the moment (~100TB, ~10k files on the buffer). Although we are seeing a low rate of these errors (a few an hour), it seems to be enough to cause the rate to tape to be effectively zero and for the buffer to fill up. I assume this is because the tape sessions are long and the batches contain many files. We continue to see single-copy tape files going to tape during this time without issue.

We are seeing some of the dual-copy files going to tape and being registered in the catalogue, but at a rate of ~100MB/s, whereas the tape servers are reading files off the buffer at ~10GB/s.

I note that in the earlier thread "In ArchiveMount::reportJobsBatchTransferred(): got an exception" (#3 by poliverc), there was a mention of issues with dual-copy storage classes and ObjectStore locking timeouts. I wonder if we’re running into something similar?

I thought the issue was that the 2nd copy report needed the 1st copy to have already been recorded in the catalogue, but we are also seeing exceptions when reporting 1st copy mounts.

Details

CTA version: 5.11.10.0-1
Operating System and version: Rocky 9.6
Xrootd version: 5.8.0-2
Objectstore backend: 17.2.8-2

After a bit more digging, we’ve noted a change in the behaviour of cta-taped after it encounters the reporting exception above.

After the In ArchiveMount::reportJobsBatchTransferred(): got an exception error occurs for a mount, there are no further attempts to reportJobsBatchTransferred for subsequent writes to tape. We see more files being transmitted to the drive, and the Normal flush because thresholds was reached message at the appropriate intervals, but no subsequent In cta::ArchiveMount::reportJobsBatchTransferred(): archive job successful messages occur.

This explains why this is quite so disruptive: one error can cause an entire tape session's worth of files (and possibly the whole cta-taped) to become dark data.

You can see an example log of this happening here: https://s3.echo.stfc.ac.uk/tom/cta-taped-obelix_ts1160_00.log, with the reporting exception happening at 2026-01-12T12:41:24+0000.

There are two questions here:

  1. Why are we getting the archive file ID not found exceptions?
  2. Why do they cause the entire mount to stop reporting until a restart?

We’re working on mitigating the impact of these occurrences via log monitoring and restarts, but if anyone has any thoughts about what might be going on here - your input would be appreciated!
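As a rough sketch of the kind of log-monitoring mitigation we mean (the log lines and file path below are illustrative only, and a real deployment would restart cta-taped via systemctl, shown here only as an echo):

```shell
# Count occurrences of the reporting exception in a cta-taped log; when
# hits > 0 since the last restart, a real deployment would restart the
# daemon (e.g. systemctl restart cta-taped) -- echoed here, not executed.
logfile=$(mktemp)
cat > "$logfile" <<'EOF'
{"log_level":"INFO","message":"Normal flush because thresholds was reached"}
{"log_level":"ERROR","message":"In ArchiveMount::reportJobsBatchTransferred(): got an exception"}
EOF
hits=$(grep -c 'reportJobsBatchTransferred(): got an exception' "$logfile")
if [ "$hits" -gt 0 ]; then
  echo "would restart cta-taped ($hits exception(s) seen since last restart)"
fi
rm -f "$logfile"
```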

Hello Tom,

I took a look into the code, and from what I see, the exception message Failed to find archive file entry in the catalogue should only be thrown when CTA was not able to find the archiveFileId in the catalogue.

However, the exact same function that is throwing this error should have just inserted the archiveFileId entry into the catalogue. For some reason, that insert failed.

Therefore, one possible explanation is that CTA was unable to update the ARCHIVE_FILE table but failed silently…

I have a few ideas about what may be causing this problem, but I need more information:

  • Are you using a Postgres or Oracle backend?
  • In the table ARCHIVE_FILE, can you please check if there is already an entry with DISK_FILE_ID= 210594737 and DISK_INSTANCE_NAME=eosantaresfac?

If you have any other relevant logs reporting errors, please let me know.

Best,
Joao

Hi Joao,

We are using an Oracle backend.

There is an entry in the ARCHIVE_FILE table with DISK_FILE_ID = 210594737 and DISK_INSTANCE_NAME = eosantaresfac.

Best,

George

Hi Joao,

Really sorry for the pressure, but at the moment this has quite an operational impact as we are keeping almost all of our CTA drives down. I am not sure if you have figured out what is happening here and what we can do, but it would be easier to simply blow away the queues for the affected pools and requeue all the files by sending the CLOSEW signal again.

I have tested the following procedure on our preprod instance and it looks like it works:

  1. Extract request object IDs from the ArchiveQueueShard object
  2. From every request object, extract the file paths
  3. Delete all request objects
  4. Delete the ArchiveQueueShard and ArchiveQueueToTransferForUser objects
  5. Reset the sys.cta.archive.objectstore.id attribute to "" for all extracted files
  6. Re-submit CLOSEW for the extracted file paths
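As an illustration of step 5, a dry-run generator along these lines can emit the eos commands for review before anything is executed (the paths are made up, and the exact eos attr set quoting should be checked against your EOS version):

```shell
# For each file path extracted from the request objects, print -- rather
# than execute -- the command that would reset the
# sys.cta.archive.objectstore.id attribute.  Paths here are invented.
printf '%s\n' \
  '/eos/antaresfac/prod/storaged-ceda/example/file1' \
  '/eos/antaresfac/prod/storaged-ceda/example/file2' |
while read -r path; do
  echo "eos attr set sys.cta.archive.objectstore.id='' $path"
done
```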

Best,

George

Hi George,

Sorry, let’s try to figure this out.

I think the reason why the requests are failing is related to your previous answer.

The ARCHIVE_FILE table has a unique constraint on (DISK_INSTANCE_NAME, DISK_FILE_ID). If we try to register a new Archive File ID with both a Disk File ID and Disk Instance Name that already exist, it will fail to insert the new file in the catalogue (it should have logged an error, but there seems to be a bug that causes this error message not to be printed).
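A minimal way to look for such clashes, assuming you can dump the queued and catalogued (disk instance, disk file ID) pairs to TSV files, is a sorted set intersection; the file names and sample contents below are invented for illustration:

```shell
# queued.tsv / catalogue.tsv hold one "disk_instance<TAB>disk_file_id" pair
# per line (contents here are samples).  comm -12 prints the pairs present
# in both files, i.e. exactly the pairs that would violate the unique
# constraint on (DISK_INSTANCE_NAME, DISK_FILE_ID) if re-inserted.
printf 'eosantaresfac\t210594737\neosantaresfac\t999999999\n' | sort > queued.tsv
printf 'eosantaresfac\t111111111\neosantaresfac\t210594737\n' | sort > catalogue.tsv
comm -12 queued.tsv catalogue.tsv
rm -f queued.tsv catalogue.tsv
```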

If you want to archive a new file, then you should also assign it a new unique Disk File ID to avoid this clash.

If this is indeed your case, is there a reason why you are trying to resubmit a new request with a pre-existing Disk File ID?


If your original problem was that you were trying to create a 2nd copy (but there is already a 1st one in another tape), then you should follow a different procedure:

  1. cta-admin tape ls --missingfilecopies: To show which tapes contain dual-copy files that are missing the 2nd copy.
  2. cta-admin repack add --justaddcopies: To request CTA to create the missing copies for a VID.

Was this your objective, or did you want to achieve something different?

This procedure can work for some scheduling problems, but I don’t think it will solve the problem that I mentioned above if the (DISK_INSTANCE_NAME, DISK_FILE_ID) pairs are already in the catalogue.

Hi George,

I guess ALICE archives are adding pressure on your side these days: you could temporarily set the facility-related VOs to 0 max write drives so that this issue stops putting your drives down.

You could also starve the affected facility target tape pools by disabling writable tapes and making sure that no new tape is allocated there.

This would allow physics archival to go through while you are working on fixing this facility issue.

Regards,

Julien

Also, I see that this problem has some similarities to this other discussion, so I wonder if they are related?

In this other case I think the problem was that there were 2 tape copies with the same Archive File ID being written to the same tape file. Usually each copy should end up in a different tape pool, which means they would never end up on the same tape.

Best,
Joao

Hi Julien, the ALICE data is going into a different library and is single copy, so it is not bothering us so far.

We have isolated the affected VO by creating a virtual library for everyone else to use.

Just need to work out why we have the overlapping IDs.

Tim

Hi all,

Thanks for the pointers everyone, I think we have a much better idea of what is going wrong, even if we don’t quite understand why, yet…

We’re working on identifying where clashes exist between the diskfileID/diskinstance pairs in the catalog and in the queued archive jobs, so we can hopefully unstick the non-problematic archival jobs.

As George and Tim say, this isn’t affecting the WLCG archival, due to separate buffers and separate drives, which is a blessing.

Thanks for your help so far.

Tom

Hello,

We ran a consistency check on all the queued files against the CTA DB: for all queued files with a disk file ID matching one in the DB, there was not any file that had a matching archive file ID in the catalogue. For reference, this is what we ran against all files extracted from existing request objects:

while read line; do
  afi=$(echo "$line" | jq -r '.archivefileid')
  dfi=$(echo "$line" | jq -r '.diskfileid')
  di=$(echo "$line" | jq -r '.diskinstance')
  tfs=$(cta-admin --json tf ls --dfid $dfi -i $di)
  tfc=$(echo $tfs | jq '. | length')
  echo "investigating file queued for archive with afid $afi, dfid $dfi, di $di"
  echo "  disk file has $tfc tape file entries"
  if [[ $tfc -gt 0 ]]; then
    afic=$(echo $tfs | jq -r '.[] | .af.archiveId' | uniq)
    if [[ $afi -ne $afic ]]; then
      echo "  == file ($dfi, $di) has NON MATCHING afid in queue and catalog - $afi $afic"
    else
      echo "  == file ($dfi, $di) has matching afid in queue and catalog - $afi $afic"
    fi
  fi
done < ArchiveRequestContents > queue_catalog_consistency_check_1

Unfortunately, we still see the above exception in the cta-taped logs. As a mitigation, we deployed a script that restarts cta-taped every time the exception has been thrown since the last cta-taped restart. This works to an extent (we managed to get some of the ingest backlog processed), but we are running into other sorts of problems: because of the frequent cta-taped restarts and tape unmounts (the latter done automatically following a restart), CTA is trying to mount tapes that have not been released from the drives, and the drives switch to the CleanUp state because the volume is in use.

We reduced the taped WatchdogUnmountMaxSecs from 1200s to 600s, but this doesn’t seem to be of much help.

Do you have any suggestions?

Thanks,

George

Just to add: the exception message

"filesWrittenToTape: Failed to find archive file entry in the catalogue: archiveFileId=4384823435, diskInstanceName=eosantaresfac, diskFileId=211134532"

does not point to one particular archive file ID but to many different ones…

Hi George,

I think this is indeed the problem:

We ran a consistency check on all the queued files against the CTA DB: for all queued files with a disk file ID matching one in the DB, there was not any file that had a matching archive file ID in the catalogue. For reference, this is what we ran against all files extracted from existing request objects

If an archive request: (1) refers to a Disk File ID and Disk Instance Name that already exist in the ARCHIVE_FILE table; but (2) contains a different Archive File ID from this row, then the request will always fail (hence this exception).

This is enforced by an SQL schema constraint.

For the request not to throw an exception, you either need to: (1) use a new Disk File ID; (2) use the same Archive File ID that you already have for that file in the CTA Catalogue; or (3) delete this Disk File ID entry from the CTA Catalogue and retry.


However, if that Disk File ID is in the CTA Catalogue, then it means that that file has already been written to tape. If you have a file with the same Disk File ID in EOS, then you probably just have incomplete information in the EOS namespace.

As a first approach, you should try to understand where this mismatch comes from and reconcile this information.

If the file already exists, trying to write it again to tape will simply result in wasted tape space.
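As a sketch of such a reconciliation check, one could compare the sys.archive.file_id xattr from eos --json fileinfo with the archiveId returned by cta-admin --json tf ls for the same disk file. The two JSON documents below are canned samples; in practice they would come from the real commands.

```shell
# eos_afid: what the EOS namespace believes (sys.archive.file_id xattr);
# cta_afid: what the CTA catalogue returns.  Both JSON strings are samples
# mimicking `eos --json fileinfo` and `cta-admin --json tf ls` output.
eos_json='{"xattr":{"sys.archive.file_id":"4384892571"}}'
cta_json='[{"af":{"archiveId":"4384892571"}}]'
eos_afid=$(echo "$eos_json" | jq -r '.xattr["sys.archive.file_id"]')
cta_afid=$(echo "$cta_json" | jq -r '.[0].af.archiveId')
if [ "$eos_afid" = "$cta_afid" ]; then
  echo "MATCH $eos_afid"
else
  echo "MISMATCH eos=$eos_afid cta=$cta_afid"
fi
```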


Could you please check these commands (for the moment checking one file is enough, for example the one with the Disk File ID 210594737) and put the output here?

  • To check the metadata and attributes of this Disk File ID in EOS:
    • eos fileinfo fid:<disk_file_id>
    • eos attr ls fid:<disk_file_id>
  • To check the metadata and attributes of the new file you are trying to archive (if not the same file as above):
    • eos fileinfo <file_path>
    • eos attr ls <file_path>
  • To check which info we currently have in the CTA Catalogue about this file:
    • cta-admin --jsonl tf ls --dfid <disk_file_id> -i <disk_instance_name>
  • To check if there is any entry about this file in the failed requests queue:
    • cta-admin --jsonl fr ls -a -l | grep <disk_file_id>

Best,
Joao

Hi Joao,

Thanks for your comments. I believe that our one-liner shown above went through the disk file IDs of all queued files and compared their archive file ID with the archive file ID associated with that particular disk file ID in the catalogue. No matches (or rather, no mismatches) were found.

The disk file ID you mention looks like it belongs to a file that has been written to tape. So I ran your suggested commands for the following recent exception instead:

{"epoch_time":1769007405.428978581,"local_time":"2026-01-21T14:56:45+0000","hostname":"getafix-ts04","program":"cta-taped","log_level":"ERROR","pid":1351866,"tid":1352681,"message":"In ArchiveMount::reportJobsBatchTransferred(): got an exception","drive_name":"obelix_ts1160_04","instance":"antares","sched_backend":"cephUser","thread":"MainThread","tapeDrive":"obelix_ts1160_04","mountId":"3389464","vo":"storaged-ceda","tapePool":"ceda_ts1160","successfulBatchSize":3,"exceptionMessageValue":"filesWrittenToTape: Failed to find archive file entry in the catalogue: archiveFileId=4384892571, diskInstanceName=eosantaresfac, diskFileId=211158782"}

[root@cta-adm-fac1 ~]# eos fileinfo fid:211158782
File: '/eos/antaresfac/prod/storaged-ceda/badc/container/spot-61520-soc251003/spot-61520-soc251003/118414789' Flags: 0644 Clock: 188945b721affbf2
Size: 10822189318
Status: healthy
Modify: Sat Jan 10 04:56:21 2026 Timestamp: 1768020981.564971000
Change: Sat Jan 10 04:55:38 2026 Timestamp: 1768020938.580406513
Access: Sat Jan 10 04:55:38 2026 Timestamp: 1768020938.575845752
Birth: Sat Jan 10 04:55:38 2026 Timestamp: 1768020938.575845330
CUid: 601 CGid: 601 Fxid: 0c9606fe Fid: 211158782 Pid: 200096866 Pxid: 0bed3c62
XStype: adler XS: ef 17 36 5a ETAGs: "56682503934574592:ef17365a"
Layout: replica Stripes: 1 Blocksize: 4k LayoutId: 00100012 Redundancy: d1::t0
#Rep: 1
┌───┬──────┬──────────────────────────┬────────────────┬────────────────┬──────────┬──────────────┬────────────┬────────┬────────────────────────┐
│no.│ fs-id│ host│ schedgroup│ path│ boot│ configstatus│ drain│ active│ geotag│
└───┴──────┴──────────────────────────┴────────────────┴────────────────┴──────────┴──────────────┴────────────┴────────┴────────────────────────┘
0 198 antares-eos18.scd.rl.ac.uk default.0 /eos/data-sdq booted rw nodrain online undef


[root@cta-adm-fac1 ~]# eos attr ls fid:211158782
sys.archive.file_id="4384892571"
sys.archive.storage_class="ceda_ob_2"
sys.cta.archive.objectstore.id="ArchiveRequest-Frontend-cta-front04.scd.rl.ac.uk-2400-20251104-09:41:41-0-5016581"
sys.eos.btime="1768020938.575845330"
sys.fs.tracking="+198"
sys.utrace="9b7f5592-ede0-11f0-9611-0c42a1f42af0"
sys.vtrace="[Sat Jan 10 04:55:38 2026] uid:601[storaged_ceda] gid:601[storaged_ceda] tident:storaged.3113317:449@fdsstoraged31.fds.rl.ac.uk name:storaged_ceda dn: prot:sss app: host:fdsstoraged31.fds.rl.ac.uk domain:fds.rl.ac.uk geo: sudo:0 trace: onbehalf:"

[root@cta-adm-fac1 ~]# eos fileinfo /eos/antaresfac/prod/storaged-ceda/badc/container/spot-61520-soc251003/spot-61520-soc251003/118414789
File: '/eos/antaresfac/prod/storaged-ceda/badc/container/spot-61520-soc251003/spot-61520-soc251003/118414789' Flags: 0644
Size: 10822189318
Status: healthy
Modify: Sat Jan 10 04:56:21 2026 Timestamp: 1768020981.564971000
Change: Sat Jan 10 04:55:38 2026 Timestamp: 1768020938.580406513
Access: Sat Jan 10 04:55:38 2026 Timestamp: 1768020938.575845752
Birth: Sat Jan 10 04:55:38 2026 Timestamp: 1768020938.575845330
CUid: 601 CGid: 601 Fxid: 0c9606fe Fid: 211158782 Pid: 200096866 Pxid: 0bed3c62
XStype: adler XS: ef 17 36 5a ETAGs: "56682503934574592:ef17365a"
Layout: replica Stripes: 1 Blocksize: 4k LayoutId: 00100012 Redundancy: d1::t0
#Rep: 1
┌───┬──────┬──────────────────────────┬────────────────┬────────────────┬──────────┬──────────────┬────────────┬────────┬────────────────────────┐
│no.│ fs-id│ host│ schedgroup│ path│ boot│ configstatus│ drain│ active│ geotag│
└───┴──────┴──────────────────────────┴────────────────┴────────────────┴──────────┴──────────────┴────────────┴────────┴────────────────────────┘
0 198 antares-eos18.scd.rl.ac.uk default.0 /eos/data-sdq booted rw nodrain online undef


[root@cta-adm-fac1 ~]#
[root@cta-adm-fac1 ~]# eos attr ls /eos/antaresfac/prod/storaged-ceda/badc/container/spot-61520-soc251003/spot-61520-soc251003/118414789
sys.archive.file_id="4384892571"
sys.archive.storage_class="ceda_ob_2"
sys.cta.archive.objectstore.id="ArchiveRequest-Frontend-cta-front04.scd.rl.ac.uk-2400-20251104-09:41:41-0-5016581"
sys.eos.btime="1768020938.575845330"
sys.fs.tracking="+198"
sys.utrace="9b7f5592-ede0-11f0-9611-0c42a1f42af0"
sys.vtrace="[Sat Jan 10 04:55:38 2026] uid:601[storaged_ceda] gid:601[storaged_ceda] tident:storaged.3113317:449@fdsstoraged31.fds.rl.ac.uk name:storaged_ceda dn: prot:sss app: host:fdsstoraged31.fds.rl.ac.uk domain:fds.rl.ac.uk geo: sudo:0 trace: onbehalf:"

So the disk file ID 211158782 points to the path /eos/antaresfac/prod/storaged-ceda/badc/container/spot-61520-soc251003/spot-61520-soc251003/118414789 and vice versa.

[root@cta-adm-fac1 ~]# cta-admin --json tf ls --dfid 211158782 -i eosantaresfac
[root@cta-adm-fac1 ~]#

No entry in the catalogue for this disk file ID (we also ran select * from archive_file where disk_file_id=211158782; but no result came back). I ran this query for a couple of other disk file IDs that threw the same exception, but no result came back for those either.

[root@cta-adm-fac1 ~]# cta-admin --json fr ls -a -l | grep 211158782
[root@cta-adm-fac1 ~]#

Best,

George

I have asked our DBA team to investigate for errors linked to SQL INSERT transactions.

While the ingest queue was building up, we carried out an upgrade of our Ceph object store from Pacific to Quincy (08/01 for the mgr/mon services and 12/01 for the OSDs). Is it possible at all that some request objects have been somehow corrupted? In that case, the manual deletion of all request objects I suggested might be a thing to try.

Hi George,

Thank you for the extra info.

Could you please run cta-admin --json tf ls --id <archive_file_id> for the archive file ID of the file you have in EOS (4384892571)?

Also, could you re-run these commands with the --json option?

  • eos --json fileinfo fid:<disk_file_id>
  • eos --json attr ls fid:<disk_file_id>

Joao

Hi Joao,

I think (99.99%) that when I ran cta-admin tf ls (without --json) yesterday for this particular archive file ID, I got the message that it does not exist in the Catalogue (which is also why the cta-admin --json tf ls --dfid 211158782 -i eosantaresfac above did not return anything).

However, now apparently it does exist…!

[root@cta-adm-fac1 ~]# cta-admin --json tf ls --dfid 211158782 -i eosantaresfac
[{"af":{"archiveId":"4384892571","storageClass":"ceda_ob_2","creationTime":"1769047360","checksum":[{"type":"ADLER32","value":"ef17365a"}],"size":"10822189318"},"df":{"diskId":"211158782","diskInstance":"eosantaresfac","ownerId":{"uid":601,"gid":601},"path":""},"tf":{"vid":"JT0333","copyNb":1,"blockId":"33391599","fSeq":"786"},"instanceName":"antares"}][root@cta-adm-fac1 ~]#

and, also,

[root@cta-adm-fac1 ~]# cta-admin --json tf ls --id 4384892571
[{"af":{"archiveId":"4384892571","storageClass":"ceda_ob_2","creationTime":"1769047360","checksum":[{"type":"ADLER32","value":"ef17365a"}],"size":"10822189318"},"df":{"diskId":"211158782","diskInstance":"eosantaresfac","ownerId":{"uid":601,"gid":601},"path":""},"tf":{"vid":"JT0333","copyNb":1,"blockId":"33391599","fSeq":"786"},"instanceName":"antares"}][root@cta-adm-fac1 ~]#

The creation time for this file (assuming that this coincides with the DB update) is indeed a later one: 2026-01-22 02:02

[root@cta-adm-fac1 ~]# cta-admin tf ls --id 4384892571
archive id copy no    vid fseq block id   disk buffer disk fxid  size checksum type checksum value storage class owner group    creation time instance
4384892571       1 JT0333  786 33391599 eosantaresfac 211158782 10.8G       ADLER32       ef17365a     ceda_ob_2   601   601 2026-01-22 02:02  antares
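(Converting the creationTime epoch from the JSON output above with GNU date confirms this timestamp:)

```shell
# Convert the catalogue creationTime (seconds since the epoch) to UTC.
date -u -d @1769047360 '+%Y-%m-%d %H:%M:%S'
# -> 2026-01-22 02:02:40
```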

So this file did manage to be archived eventually?? I checked a few other archive file IDs that threw the exception earlier in the week and most (but not all) now appear in the catalogue (though not always with both tape copies). Not sure what to make of all this…!

Pasting the output of the commands you requested:

[root@cta-adm-fac1 ~]# eos --json fileinfo fid:211158782
{
        "atime" : 1768020938,
        "atime_ns" : 575845752,
        "btime" : 1768020938,
        "btime_ns" : 575845330,
        "checksumtype" : "adler",
        "checksumvalue" : "ef17365a",
        "ctime" : 1768020938,
        "ctime_ns" : 580406513,
        "detached" : false,
        "etag" : "\"56682503934574592:ef17365a\"",
        "fxid" : "0c9606fe",
        "gid" : 601,
        "id" : 211158782,
        "inode" : 56682503934574592,
        "layout" : "replica",
        "locations" :
        [
                {
                        "fsid" : 198,
                        "fstpath" : "/eos/data-sdq/0000527b/0c9606fe",
                        "geotag" : "undef",
                        "host" : "antares-eos18.scd.rl.ac.uk",
                        "mountpoint" : "/eos/data-sdq",
                        "schedgroup" : "default.0",
                        "status" : "booted"
                }
        ],
        "mode" : 420,
        "mtime" : 1768020981,
        "mtime_ns" : 564971000,
        "name" : "118414789",
        "nlink" : 1,
        "nstripes" : 1,
        "path" : "/eos/antaresfac/prod/storaged-ceda/badc/container/spot-61520-soc251003/spot-61520-soc251003/118414789",
        "pid" : 200096866,
        "size" : 10822189318,
        "status" : "healthy",
        "uid" : 601,
        "xattr" :
        {
                "sys.archive.file_id" : "4384892571",
                "sys.archive.storage_class" : "ceda_ob_2",
                "sys.cta.archive.objectstore.id" : "ArchiveRequest-Frontend-cta-front04.scd.rl.ac.uk-2400-20251104-09:41:41-0-5016581",
                "sys.eos.btime" : "1768020938.575845330",
                "sys.fs.tracking" : "+198",
                "sys.utrace" : "9b7f5592-ede0-11f0-9611-0c42a1f42af0",
                "sys.vtrace" : "[Sat Jan 10 04:55:38 2026] uid:601[storaged_ceda] gid:601[storaged_ceda] tident:storaged.3113317:449@fdsstoraged31.fds.rl.ac.uk name:storaged_ceda dn: prot:sss app: host:fdsstoraged31.fds.rl.ac.uk domain:fds.rl.ac.uk geo: sudo:0 trace: onbehalf:"
        }
}
[root@cta-adm-fac1 ~]# eos --json attr ls fid:211158782
{
        "attr" :
        {
                "ls" :
                [
                        {
                                "sys" :
                                {
                                        "archive" :
                                        {
                                                "file_id" : "4384892571"
                                        }
                                }
                        },
                        {
                                "sys" :
                                {
                                        "archive" :
                                        {
                                                "storage_class" : "ceda_ob_2"
                                        }
                                }
                        },
                        {
                                "sys" :
                                {
                                        "cta" :
                                        {
                                                "archive" :
                                                {
                                                        "objectstore" :
                                                        {
                                                                "id" : "ArchiveRequest-Frontend-cta-front04.scd.rl.ac.uk-2400-20251104-09:41:41-0-5016581"
                                                        }
                                                }
                                        }
                                }
                        },
                        {
                                "sys" :
                                {
                                        "eos" :
                                        {
                                                "btime" : "1768020938.575845330"
                                        }
                                }
                        },
                        {
                                "sys" :
                                {
                                        "fs" :
                                        {
                                                "tracking" : "+198"
                                        }
                                }
                        },
                        {
                                "sys" :
                                {
                                        "utrace" : "9b7f5592-ede0-11f0-9611-0c42a1f42af0"
                                }
                        },
                        {
                                "sys" :
                                {
                                        "vtrace" : "[Sat"
                                }
                        }
                ]
        },
        "errormsg" : "",
        "retc" : "0"
}

Hi George,

Great! I’m glad we are seeing some progress, but it seems to me that there are still some issues.

I see in your output of eos --json attr ls fid:211158782 that there is still a sys.cta.archive.objectstore.id attribute.
In addition, eos --json fileinfo fid:211158782 does not show a copy on tape.

This probably means that the file was successfully archived to tape but not reported back to CTA.

Could you please check the output of cta-admin --jsonl fr ls -a -l | grep 211158782 again? (or without the grep).
This command shows the failed requests, for operator inspection.

If the file is there, then it confirms that it failed to be reported.

Hi Joao,

Thanks for persevering with me! The command cta-admin --json fr ls -a -l | grep 211158782 did not return anything, so this file is not (yet?) registered as a failed request.

I think the reason why the file with this fid (211158782) does not show a copy on tape in EOS (assuming you are referring to the locations attribute .fsid == 65535 in the output of eos --json fileinfo) is because only the first of the two tape copies is in the DB:

[root@cta-adm-fac1 ~]# cta-admin tf ls --id 4384892571
archive id copy no    vid fseq block id   disk buffer disk fxid  size checksum type checksum value storage class owner group    creation time instance
4384892571       1 JT0333  786 33391599 eosantaresfac 211158782 10.8G       ADLER32       ef17365a     ceda_ob_2   601   601 2026-01-22 02:02  antares

Best,

George