EOS not updated after successful archival

Hi,

I have a problem similar to the one described in "Files archived but eoscta disagrees".

I am trying to do something like

eos file workflow /eos/antares/prod/dteam/george/4GB_take2/random_22 default sync::archived

but the MGM shows this error

220412 18:54:32 time=1649786072.605948 func=HandleProtoMethodEvents level=INFO logid=static… unit=mgm@antares-eos01.scd.rl.ac.uk:1094 tid=00007fbad42f3700 source=WFE:1615 tident= sec=(null) uid=99 gid=99 name=- geo="" default SYNC::ARCHIVED /eos/antares/prod/dteam/george/4GB_take2/random_22 cta-front01.scd.rl.ac.uk:10955 fxid=2fba8a9b mgm.reqid=""
220412 18:54:32 time=1649786072.606037 func=HandleProtoMethodArchivedEvent level=ERROR logid=static… unit=mgm@antares-eos01.scd.rl.ac.uk:1094 tid=00007fbad42f3700 source=WFE:2217 tident= sec=(null) uid=99 gid=99 name=- geo="" The opaque data of the archived message for file /eos/antares/prod/dteam/george/4GB_take2/random_22 does not contain cta_archive_file_id. Ignoring request.

Do you know how I can "convince" EOS that a file is on tape?

Thanks,

George

Hi George

It looks like you are trying to manually trigger the ARCHIVED workflow. Can you explain why you need to do this?

The specific error you are seeing means that the ARCHIVED workflow has been triggered for a file which does not have a CTA archive ID associated with it. The archive ID is assigned during the CREATE event, so if this is missing, it probably means that the file was not archived in CTA.
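For a quick check of whether EOS has an archive ID recorded, a minimal sketch (the query against a live instance is commented out; the extraction below runs on a made-up sample of `eos attr ls` output, so the ID shown is not a real one):

```shell
# Check whether EOS has a CTA archive ID recorded for the file
# (requires the EOS CLI; not run here):
#eos attr ls /eos/antares/prod/dteam/george/4GB_take2/random_22 | grep sys.archive.file_id

# The same extraction applied to a captured sample of 'eos attr ls' output
# (the value 1234567890 is a made-up example):
attrs='sys.archive.file_id="1234567890"
sys.archive.storage_class="dteam"'
archive_id=$(printf '%s\n' "$attrs" | sed -n 's/^sys\.archive\.file_id="\(.*\)"$/\1/p')
echo "$archive_id"
```

If the attribute is missing entirely, the CREATE event never assigned an ID and the ARCHIVED workflow cannot succeed.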

Michael

Hello Michael,

I am trying to do this because, according to EOS, the file is still in the default space and not on tape, i.e.

File: '/eos/antares/prod/dteam/george/4GB_take2/random_22' Flags: 0644
Size: 4294967296
Modify: Fri Mar 25 13:44:57 2022 Timestamp: 1648215897.096861000
Change: Fri Mar 25 13:44:48 2022 Timestamp: 1648215888.229050922
Birth: Fri Mar 25 13:44:48 2022 Timestamp: 1648215888.224082132
CUid: 36300 CGid: 24311 Fxid: 2fba8a9b Fid: 800754331 Pid: 800003026 Pxid: 2faf13d2
XStype: adler XS: fc 5a 7e 0d ETAGs: "214950853985959936:fc5a7e0d"
Layout: replica Stripes: 1 Blocksize: 4k LayoutId: 00100012 Redundancy: d1::t0
#Rep: 1
no.  fs-id  host                        schedgroup  path           boot    configstatus  drain    active  geotag
  0     28  antares-eos02.scd.rl.ac.uk  default.0   /eos/data-sdn  booted  rw            nodrain  online  undef


In the MGM log, I can see the SYNC::CREATE and SYNC::CLOSEW workflow requests to the Frontend, but not the SYNC::ARCHIVED event/request.

The file does have a tape file entry

archive id  copy no  vid     fseq  block id  instance    disk fxid  size  checksum type  checksum value  storage class  owner  group  creation time     path
4295711906  1        CL0037  1421  16247587  eosantares  2fba8a9b   4.3G  ADLER32        fc5a7e0d        dteam          36300  24311  2022-03-25 14:09  -

but for some reason it is also included in the list of failed requests (cta-admin fr ls -a -t dteam).

This is a dteam test file, but I have seen the same thing with some CMS tape challenge files as well.

George

What is the output of eos attr ls on this file?

eos attr ls /eos/antares/prod/dteam/george/4GB_take2/random_22
sys.archive.file_id="4295711906"
sys.archive.storage_class="dteam"
sys.cta.archive.objectstore.id="ArchiveRequest-Frontend-cta-front01.scd.rl.ac.uk-3683-20220316-09:54:21-0-857166"
sys.eos.btime="1648215888.224082132"
sys.utrace="bccfc8c4-ac41-11ec-a62e-1c34da4b345c"
sys.vtrace="[Fri Mar 25 13:44:48 2022] uid:36300[dteam001] gid:24311[dteam] tident:georgep.730878:2910@lcgui06.gridpp.rl.ac.uk name:dteam001 dn: prot:gsi host:lcgui06.gridpp.rl.ac.uk domain:gridpp.rl.ac.uk geo: sudo:0"

The output is similar for the CMS file.

OK, so the archive request was created in CTA. If the file is on tape, then presumably there was a reporting failure.

Does the object ArchiveRequest-Frontend-cta-front01.scd.rl.ac.uk-3683-20220316-09:54:21-0-857166 still exist in the objectstore? Can you check what queue it belongs to? (Is it in failed requests for example?) Can you dump its contents?

Yes, the object is still in the object store (the request is in the Archive queue)

[root@cta-adm georgep]# cta-objectstore-dump-object ArchiveRequest-Frontend-cta-front01.scd.rl.ac.uk-3683-20220316-09:54:21-0-857166
Object store path: rados://antares@cta:antares
Object name: ArchiveRequest-Frontend-cta-front01.scd.rl.ac.uk-3683-20220316-09:54:21-0-857166
Header dump:
{
“type”: “ArchiveRequest_t”,
“version”: “0”,
“owner”: “”,
“backupowner”: “”,
“payload”: “8LEEormtgBCasgSmAaKxBBFkdGVhbV9tb3VudHBvbGljeaixBAGwsQTYBLixBAHAsQQF0rEEMbKvBAljdGEtYWRtaW6CsAQYY3RhLWZyb250MDEuc2NkLnJsLmFjLnVr0LAEl4H+kAbasQQxsq8ECWN0YS1hZG1pboKwBBhjdGEtZnJvbnQwMS5zY2QucmwuYWMudWvQsASkiu2RBuKxBBJkdGVhbSBtb3VudCBwb2xpY3maswQKCggIARIEDX5a/LizBICAgICAgICAgAHIswQAgrUEQsCtBMybAuitBPe9AeKuBDIvZW9zL2FudGFyZXMvcHJvZC9kdGVhbS9nZW9yZ2UvNEdCX3Rha2UyL3JhbmRvbV8yMtK1BAk4MDA3NTQzMzH6tQQKZW9zYW50YXJlc4q2BOABZW9zUXVlcnk6Ly9hbnRhcmVzLWVvczAxLnNjZC5ybC5hYy51azoxMDk0Ly9lb3Mvd2ZlL3Bhc3N3ZD9tZ20ucGNtZD1ldmVudCZtZ20uZmlkPTJmYmE4YTliJm1nbS5sb2dpZD1jdGEmbWdtLmV2ZW50PXN5bmM6OmFyY2hpdmVkJm1nbS53b3JrZmxvdz1kZWZhdWx0Jm1nbS5wYXRoPS9kdW1teV9wYXRoJm1nbS5ydWlkPTAmbWdtLnJnaWQ9MCZjdGFfYXJjaGl2ZV9maWxlX2lkPTQyOTU3MTE5MDaStgTyAWVvc1F1ZXJ5Oi8vYW50YXJlcy1lb3MwMS5zY2QucmwuYWMudWs6MTA5NC8vZW9zL3dmZS9wYXNzd2Q/bWdtLnBjbWQ9ZXZlbnQmbWdtLmZpZD0yZmJhOGE5YiZtZ20ubG9naWQ9Y3RhJm1nbS5ldmVudD1zeW5jOjphcmNoaXZlX2ZhaWxlZCZtZ20ud29ya2Zsb3c9ZGVmYXVsdCZtZ20ucGF0aD0vZHVtbXlfcGF0aCZtZ20ucnVpZD0wJm1nbS5yZ2lkPTAmY3RhX2FyY2hpdmVfZmlsZV9pZD00Mjk1NzExOTA2Jm1nbS5lcnJtc2c9oLYEgICAgBDytgQRUghkdGVhbTAwMVoFZHRlYW3CtwRvcm9vdDovL2FudGFyZXMtZW9zMDEuc2NkLnJsLmFjLnVrOjEwOTQvL2Vvcy9hbnRhcmVzL3Byb2QvZHRlYW0vZ2VvcmdlLzRHQl90YWtlMi9yYW5kb21fMjI/ZW9zLmxmbj1meGlkOjJmYmE4YTlikrgEBWR0ZWFtmrgENLKvBAplb3NhbnRhcmVzgrAEGmFudGFyZXMtZW9zMDIuc2NkLnJsLmFjLnVr0LAE2Y73kQaiuATxA4CTAgGKkwIFZHRlYW2SkwIAmpMCV0FyY2hpdmVRdWV1ZUZhaWxlZC1kdGVhbS1NYWludGVuYW5jZS1jdGEtdHMxNy5zY2QucmwuYWMudWstOTU2Ny0yMDIyMDMwMi0xMDowODowMy0wLTE1MKCTAuYHqJMCALCTAgC4kwIAwJMCAsiTAgLYkwIC4JMCAuqTAq0BTWFyIDI1IDE0OjA5OjUzLjM4Mjk1NyBjdGEtdHMxOCBJbiBFT1NSZXBvcnRlcjo6QXN5bmNRdWVyeUhhbmRsZXI6OkhhbmRsZVJlc3BvbnNlKCk6IGZhaWxlZCB0byBYcmRDbDo6RmlsZVN5c3RlbTo6UXVlcnkoKSBbRVJST1JdIE9wZXJhdGlvbiBleHBpcmVkIGNvZGU6MjA2IGVyck5vOjAgc3RhdHVzOjHqkwKtAU1hciAyNSAxNDoxMDowOC4zNDQ0MjEgY3RhLXRzMTggSW4gRU9TUmVwb3J0ZXI6OkFzeW5jUXVlcnlIYW5kbGVyOjpIYW5kbGVSZXNwb25zZSgpOiBmYWlsZWQgdG8gWHJkQ2w6OkZpbGVTeXN0ZW06OlF1ZXJ5KCkgW0VSUk9SXSBPcGVyYXRpb24gZXhwaXJlZCBjb2RlOjIwNiBlcnJObzowIHN0YXR1
czoxqLgEALi4BAA=”
}
Body dump:
{
  "archivefileid": "4295711906",
  "mountpolicy": {
    "name": "dteam_mountpolicy",
    "archivepriority": "1",
    "archiveminrequestage": "600",
    "retrievepriority": "1",
    "retieveminrequestage": "5",
    "creationlog": {
      "username": "cta-admin",
      "host": "cta-front01.scd.rl.ac.uk",
      "time": "1646231703"
    },
    "lastmodificationlog": {
      "username": "cta-admin",
      "host": "cta-front01.scd.rl.ac.uk",
      "time": "1648051492"
    },
    "comment": "dteam mount policy"
  },
  "checksumblob": "CggIARIEDX5a/A==",
  "creationtime": "9223372036854775808",
  "reconcilationtime": "0",
  "diskfileinfo": {
    "ownerUid": 36300,
    "gid": 24311,
    "path": "/eos/antares/prod/dteam/george/4GB_take2/random_22"
  },
  "diskfileid": "800754331",
  "diskinstance": "eosantares",
  "archivereporturl": "eosQuery://antares-eos01.scd.rl.ac.uk:1094//eos/wfe/passwd?mgm.pcmd=event&mgm.fid=2fba8a9b&mgm.logid=cta&mgm.event=sync::archived&mgm.workflow=default&mgm.path=/dummy_path&mgm.ruid=0&mgm.rgid=0&cta_archive_file_id=4295711906",
  "archiveerrorreporturl": "eosQuery://antares-eos01.scd.rl.ac.uk:1094//eos/wfe/passwd?mgm.pcmd=event&mgm.fid=2fba8a9b&mgm.logid=cta&mgm.event=sync::archive_failed&mgm.workflow=default&mgm.path=/dummy_path&mgm.ruid=0&mgm.rgid=0&cta_archive_file_id=4295711906&mgm.errmsg=",
  "filesize": "4294967296",
  "requester": {
    "name": "dteam001",
    "group": "dteam"
  },
  "srcurl": "root://antares-eos01.scd.rl.ac.uk:1094//eos/antares/prod/dteam/george/4GB_take2/random_22?eos.lfn=fxid:2fba8a9b",
  "storageclass": "dteam",
  "creationlog": {
    "username": "eosantares",
    "host": "antares-eos02.scd.rl.ac.uk",
    "time": "1648215897"
  },
  "jobs": [
    {
      "copynb": 1,
      "tapepool": "dteam",
      "archivequeueaddress": "",
      "owner": "ArchiveQueueFailed-dteam-Maintenance-cta-ts17.scd.rl.ac.uk-9567-20220302-10:08:03-0-150",
      "status": "AJS_Failed",
      "totalretries": 0,
      "retrieswithinmount": 0,
      "lastmountwithfailure": "0",
      "maxtotalretries": 2,
      "maxretrieswithinmount": 2,
      "failurelogs": [],
      "maxreportretries": 2,
      "totalreportretries": 2,
      "reportfailurelogs": [
        "Mar 25 14:09:53.382957 cta-ts18 In EOSReporter::AsyncQueryHandler::HandleResponse(): failed to XrdCl::FileSystem::Query() [ERROR] Operation expired code:206 errNo:0 status:1",
        "Mar 25 14:10:08.344421 cta-ts18 In EOSReporter::AsyncQueryHandler::HandleResponse(): failed to XrdCl::FileSystem::Query() [ERROR] Operation expired code:206 errNo:0 status:1"
      ]
    }
  ],
  "reportdecided": false,
  "isrepack": false
}

Similarly for the CMS file:

[root@cta-adm georgep]# cta-objectstore-dump-object ArchiveRequest-Frontend-cta-front01.scd.rl.ac.uk-3683-20220316-09:54:21-0-733052
Object store path: rados://antares@cta:antares
Object name: ArchiveRequest-Frontend-cta-front01.scd.rl.ac.uk-3683-20220316-09:54:21-0-733052
Header dump:
{
“type”: “ArchiveRequest_t”,
“version”: “0”,
“owner”: “”,
“backupowner”: “”,
“payload”: “8LEEzqqqgBCasgSrAaKxBA9jbXNfbW91bnRwb2xpY3mosQQBsLEE2AS4sQQBwLEE2ATSsQQxsq8ECWN0YS1hZG1pboKwBBhjdGEtZnJvbnQwMS5zY2QucmwuYWMudWvQsAThjISRBtqxBDGyrwQJY3RhLWFkbWlugrAEGGN0YS1mcm9udDAxLnNjZC5ybC5hYy51a9CwBNariJEG4rEEGENNUyBwcmVwcm9kIG1vdW50IHBvbGljeZqzBAoKCAgBEgS1Ox7wuLMEgICAgICAgICAAcizBACCtQSJAcCtBK6fAuitBIe+AeKuBHkvZW9zL2FudGFyZXMvcHJvZC9jbXMvc3RvcmUvZGF0YS9NYXJjaFRhcGVUZXN0MjAyMi9EMDUvUkFXL3Y0LzAwMC8zNDcvOTk4LzAwMDAyL2IwMzIzMWIzLTU0YWUtNDAyMi05MDgwLWNjOTY0OTU3NjE5NC5yb2900rUECTgwMDcwMjc4M/q1BAplb3NhbnRhcmVzirYE4AFlb3NRdWVyeTovL2FudGFyZXMtZW9zMDEuc2NkLnJsLmFjLnVrOjEwOTQvL2Vvcy93ZmUvcGFzc3dkP21nbS5wY21kPWV2ZW50Jm1nbS5maWQ9MmZiOWMxM2YmbWdtLmxvZ2lkPWN0YSZtZ20uZXZlbnQ9c3luYzo6YXJjaGl2ZWQmbWdtLndvcmtmbG93PWRlZmF1bHQmbWdtLnBhdGg9L2R1bW15X3BhdGgmbWdtLnJ1aWQ9MCZtZ20ucmdpZD0wJmN0YV9hcmNoaXZlX2ZpbGVfaWQ9NDI5NTY2MDg3OJK2BPIBZW9zUXVlcnk6Ly9hbnRhcmVzLWVvczAxLnNjZC5ybC5hYy51azoxMDk0Ly9lb3Mvd2ZlL3Bhc3N3ZD9tZ20ucGNtZD1ldmVudCZtZ20uZmlkPTJmYjljMTNmJm1nbS5sb2dpZD1jdGEmbWdtLmV2ZW50PXN5bmM6OmFyY2hpdmVfZmFpbGVkJm1nbS53b3JrZmxvdz1kZWZhdWx0Jm1nbS5wYXRoPS9kdW1teV9wYXRoJm1nbS5ydWlkPTAmbWdtLnJnaWQ9MCZjdGFfYXJjaGl2ZV9maWxlX2lkPTQyOTU2NjA4NzgmbWdtLmVycm1zZz2gtgSy3ODnJvK2BBJSB3BjbXMwMDFaB3Byb2RjbXPCtwS2AXJvb3Q6Ly9hbnRhcmVzLWVvczAxLnNjZC5ybC5hYy51azoxMDk0Ly9lb3MvYW50YXJlcy9wcm9kL2Ntcy9zdG9yZS9kYXRhL01hcmNoVGFwZVRlc3QyMDIyL0QwNS9SQVcvdjQvMDAwLzM0Ny85OTgvMDAwMDIvYjAzMjMxYjMtNTRhZS00MDIyLTkwODAtY2M5NjQ5NTc2MTk0LnJvb3Q/ZW9zLmxmbj1meGlkOjJmYjljMTNmkrgECGNtc2NoYWxsmrgENLKvBAplb3NhbnRhcmVzgrAEGmFudGFyZXMtZW9zMDEuc2NkLnJsLmFjLnVr0LAEiI7zkQaiuAT5A4CTAgGKkwIIY21zY2hhbGySkwIAmpMCXEFyY2hpdmVRdWV1ZUZhaWxlZC1jbXNjaGFsbC1NYWludGVuYW5jZS1jdGEtdHMwMy5zY2QucmwuYWMudWstNTM2NzQtMjAyMjAzMDItMDk6NTQ6MDAtMC04MTY4oJMC5geokwIAsJMCALiTAgDAkwICyJMCAtiTAgLgkwIC6pMCrQFNYXIgMjQgMTk6NDI6MjcuNzE4NTA0IGN0YS10czA5IEluIEVPU1JlcG9ydGVyOjpBc3luY1F1ZXJ5SGFuZGxlcjo6SGFuZGxlUmVzcG9uc2UoKTogZmFpbGVkIHRvIFhyZENsOjpGaWxlU3lzdGVtOjpRdWVyeSgpIFtFUlJPUl0gT3BlcmF0aW9uIGV4cGlyZWQgY29kZToyMDYgZXJyTm86MCBzdGF0dXM6MeqTAq0BTWFyIDI0IDE5
OjQyOjQyLjcyMzU1OCBjdGEtdHMwOSBJbiBFT1NSZXBvcnRlcjo6QXN5bmNRdWVyeUhhbmRsZXI6OkhhbmRsZVJlc3BvbnNlKCk6IGZhaWxlZCB0byBYcmRDbDo6RmlsZVN5c3RlbTo6UXVlcnkoKSBbRVJST1JdIE9wZXJhdGlvbiBleHBpcmVkIGNvZGU6MjA2IGVyck5vOjAgc3RhdHVzOjGouAQAuLgEAA==”
}
Body dump:
{
  "archivefileid": "4295660878",
  "mountpolicy": {
    "name": "cms_mountpolicy",
    "archivepriority": "1",
    "archiveminrequestage": "600",
    "retrievepriority": "1",
    "retieveminrequestage": "600",
    "creationlog": {
      "username": "cta-admin",
      "host": "cta-front01.scd.rl.ac.uk",
      "time": "1646331489"
    },
    "lastmodificationlog": {
      "username": "cta-admin",
      "host": "cta-front01.scd.rl.ac.uk",
      "time": "1646400982"
    },
    "comment": "CMS preprod mount policy"
  },
  "checksumblob": "CggIARIEtTse8A==",
  "creationtime": "9223372036854775808",
  "reconcilationtime": "0",
  "diskfileinfo": {
    "ownerUid": 36782,
    "gid": 24327,
    "path": "/eos/antares/prod/cms/store/data/MarchTapeTest2022/D05/RAW/v4/000/347/998/00002/b03231b3-54ae-4022-9080-cc9649576194.root"
  },
  "diskfileid": "800702783",
  "diskinstance": "eosantares",
  "archivereporturl": "eosQuery://antares-eos01.scd.rl.ac.uk:1094//eos/wfe/passwd?mgm.pcmd=event&mgm.fid=2fb9c13f&mgm.logid=cta&mgm.event=sync::archived&mgm.workflow=default&mgm.path=/dummy_path&mgm.ruid=0&mgm.rgid=0&cta_archive_file_id=4295660878",
  "archiveerrorreporturl": "eosQuery://antares-eos01.scd.rl.ac.uk:1094//eos/wfe/passwd?mgm.pcmd=event&mgm.fid=2fb9c13f&mgm.logid=cta&mgm.event=sync::archive_failed&mgm.workflow=default&mgm.path=/dummy_path&mgm.ruid=0&mgm.rgid=0&cta_archive_file_id=4295660878&mgm.errmsg=",
  "filesize": "10418138674",
  "requester": {
    "name": "pcms001",
    "group": "prodcms"
  },
  "srcurl": "root://antares-eos01.scd.rl.ac.uk:1094//eos/antares/prod/cms/store/data/MarchTapeTest2022/D05/RAW/v4/000/347/998/00002/b03231b3-54ae-4022-9080-cc9649576194.root?eos.lfn=fxid:2fb9c13f",
  "storageclass": "cmschall",
  "creationlog": {
    "username": "eosantares",
    "host": "antares-eos01.scd.rl.ac.uk",
    "time": "1648150280"
  },
  "jobs": [
    {
      "copynb": 1,
      "tapepool": "cmschall",
      "archivequeueaddress": "",
      "owner": "ArchiveQueueFailed-cmschall-Maintenance-cta-ts03.scd.rl.ac.uk-53674-20220302-09:54:00-0-8168",
      "status": "AJS_Failed",
      "totalretries": 0,
      "retrieswithinmount": 0,
      "lastmountwithfailure": "0",
      "maxtotalretries": 2,
      "maxretrieswithinmount": 2,
      "failurelogs": [],
      "maxreportretries": 2,
      "totalreportretries": 2,
      "reportfailurelogs": [
        "Mar 24 19:42:27.718504 cta-ts09 In EOSReporter::AsyncQueryHandler::HandleResponse(): failed to XrdCl::FileSystem::Query() [ERROR] Operation expired code:206 errNo:0 status:1",
        "Mar 24 19:42:42.723558 cta-ts09 In EOSReporter::AsyncQueryHandler::HandleResponse(): failed to XrdCl::FileSystem::Query() [ERROR] Operation expired code:206 errNo:0 status:1"
      ]
    }
  ],
  "reportdecided": false,
  "isrepack": false
}

Both of these jobs are in the failed queue. As I surmised, the archival itself was successful:

"totalretries": 0,
"retrieswithinmount": 0,
"lastmountwithfailure": "0",

but it failed on the reporting step:

"maxreportretries": 2,
"totalreportretries": 2,
"reportfailurelogs": [
  "Mar 24 19:42:27.718504 cta-ts09 In EOSReporter::AsyncQueryHandler::HandleResponse(): failed to XrdCl::FileSystem::Query() [ERROR] Operation expired code:206 errNo:0 status:1",
  "Mar 24 19:42:42.723558 cta-ts09 In EOSReporter::AsyncQueryHandler::HandleResponse(): failed to XrdCl::FileSystem::Query() [ERROR] Operation expired code:206 errNo:0 status:1"
]

It looks like CTA's EOSReporter could not contact the EOS MGM. Search your EOS logs around those timestamps to see whether the incoming request arrived; if not, perhaps it was a transient network issue.
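Note that the error in your first post said the opaque data did not contain cta_archive_file_id: unlike `eos file workflow`, the reporter's query does carry it. A sketch of that query, rebuilt from the "archivereporturl" in your object dump (the replay command at the end is an untested assumption, so use it with care):

```shell
# The opaque part of the archivereporturl is what EOSReporter sends via
# XrdCl::FileSystem::Query(); rebuilt here from values in this thread:
mgm="antares-eos01.scd.rl.ac.uk:1094"   # MGM endpoint (used only in the commented replay)
fxid="2fba8a9b"
archive_id="4295711906"
opaque="/eos/wfe/passwd?mgm.pcmd=event&mgm.fid=${fxid}&mgm.logid=cta&mgm.event=sync::archived&mgm.workflow=default&mgm.path=/dummy_path&mgm.ruid=0&mgm.rgid=0&cta_archive_file_id=${archive_id}"
echo "$opaque"
# In principle this could be replayed by hand to retrigger the ARCHIVED
# event with the archive ID included (untested assumption, not run here):
#xrdfs "root://${mgm}" query opaquefile "$opaque"
```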

We do have a ticket open to fix the problem where reporting fails, retries immediately, and fails again for the same reason. We plan to insert a minimum delay between retries, to be more resilient against transient problems like network failures.
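For illustration only, the intended behaviour could look roughly like this sketch (function and variable names are made up; CTA's actual implementation is C++, not shell):

```shell
# Hedged sketch: retry a report with a minimum delay between attempts,
# instead of two back-to-back attempts hitting the same transient failure.
report_with_retries() {
    min_delay=$1   # seconds to wait between attempts
    max_retries=2
    attempt=0
    while [ "$attempt" -lt "$max_retries" ]; do
        # Sleep before every attempt except the first
        [ "$attempt" -gt 0 ] && sleep "$min_delay"
        if do_report; then
            return 0
        fi
        attempt=$((attempt + 1))
    done
    return 1   # all attempts failed
}

# Example with a stand-in report command that fails once, then succeeds:
tries=0
do_report() {
    tries=$((tries + 1))
    [ "$tries" -ge 2 ]
}
report_with_retries 0 && echo "reported after $tries attempts"
```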