What should happen during repack?

Summary

We have a repack running for a tape of about 5 TB, but what is happening seems very strange.

The repack read the tape just fine and placed all the contents into an NFS-mounted disk area. The archive phase started, and looking at cta-admin dr ls everything looks fine. So far it has written over 3 TB:

G2_LTO8_DEV G2_F8C4R2 tpsrvg2105      Up ArchiveForRepack Transfer 12808 VR7100 test.cta_test_2copy_copy_1 dev_repack  1563 3.3T 255.6   29113        0        -  cephUser  13 -      

and looking at the drive logs all looks normal too:

{"epoch_time":1747853655.086340930,"local_time":"2025-05-21T13:54:15-0500","hostname":"tpsrvg2105","program":"cta-taped","log_level":"INFO","pid":3027568,"tid":3062633,"message":"File successfully read from disk","drive_name":"G2_F8C4R2","instance":"dev","sched_backend":"cephUser","thread":"DiskRead","tapeDrive":"G2_F8C4R2","tapeVid":"VR7100","mountId":"29113","vo":"dev_repack","tapePool":"test.cta_test_2copy_copy_1","threadID":7,"path":"file:////pnfs/Migration/VR1871/000000029","actualURL":"file:////pnfs/Migration/VR1871/000000029","fileId":54454163,"readWriteTime":3.632026,"checksumingTime":0.0,"waitFreeMemoryTime":69.5891090000001,"waitDataTime":0.0,"waitReportingTime":0.0,"checkingErrorTime":0.000555000000000003,"openingTime":0.007903,"transferTime":73.229596,"totalTime":73.229596,"dataVolume":2097152000,"globalPayloadTransferSpeedMBps":28.6380386421905,"diskPerformanceMBps":28.6380386421905,"openRWCloseToTransferTimeRatio":0.04970570915071}
{"epoch_time":1747853655.096964629,"local_time":"2025-05-21T13:54:15-0500","hostname":"tpsrvg2105","program":"cta-taped","log_level":"INFO","pid":3027568,"tid":3062633,"message":"Opened disk file for read","drive_name":"G2_F8C4R2","instance":"dev","sched_backend":"cephUser","thread":"DiskRead","tapeDrive":"G2_F8C4R2","tapeVid":"VR7100","mountId":"29113","vo":"dev_repack","tapePool":"test.cta_test_2copy_copy_1","threadID":7,"path":"file:////pnfs/Migration/VR1871/000000380","actualURL":"file:////pnfs/Migration/VR1871/000000380","fileId":54452244}
{"epoch_time":1747853661.966283774,"local_time":"2025-05-21T13:54:21-0500","hostname":"tpsrvg2105","program":"cta-taped","log_level":"INFO","pid":3027568,"tid":3062637,"message":"File successfully transmitted to drive","drive_name":"G2_F8C4R2","instance":"dev","sched_backend":"cephUser","thread":"TapeWrite","tapeDrive":"G2_F8C4R2","tapeVid":"VR7100","mountId":"29113","vo":"dev_repack","tapePool":"test.cta_test_2copy_copy_1","mediaType":"LTO7M","logicalLibrary":"G2_LTO8_DEV","mountType":"ArchiveForRepack","vendor":"Unknown","capacityInBytes":9000000000000,"fileId":54452357,"fileSize":2097152000,"fSeq":1543,"diskURL":"file:////pnfs/Migration/VR1871/000000368","readWriteTime":5.97889,"checksumingTime":0.901273,"waitDataTime":0.00250699999999999,"waitReportingTime":0.000175,"transferTime":6.882845,"totalTime":6.882817,"dataVolume":2097152000,"headerVolume":480,"driveTransferSpeedMBps":304.693918202387,"payloadTransferSpeedMBps":304.6938484635,"reconciliationTime":1699296224,"LBPMode":"LBP_On"}
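For what it's worth, this is roughly how we cross-check the bytes reported by cta-admin dr ls against the drive logs; a quick sketch that sums the "File successfully transmitted to drive" records per mount (field names taken from the log lines above, nothing else assumed):

```python
import json

def tally_tape_writes(lines):
    """Sum payload bytes and count files per mountId from cta-taped
    'File successfully transmitted to drive' JSON log lines."""
    totals = {}
    for line in lines:
        try:
            rec = json.loads(line)
        except ValueError:
            continue  # skip wrapped or partial lines
        if rec.get("message") != "File successfully transmitted to drive":
            continue
        mount = rec.get("mountId", "?")
        files, volume = totals.get(mount, (0, 0))
        totals[mount] = (files + 1, volume + int(rec.get("dataVolume", 0)))
    return totals

# One of the log lines above, truncated to the relevant fields:
sample = ['{"message":"File successfully transmitted to drive",'
          '"mountId":"29113","dataVolume":2097152000}']
print(tally_tape_writes(sample))  # {'29113': (1, 2097152000)}
```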

with messages like these repeating. We also seem to have had several mounts of the destination tape, with these messages:

{"epoch_time":1747835106.071864160,"local_time":"2025-05-21T08:45:06-0500","hostname":"tpsrvg2105","program":"cta-taped","log_level":"INFO","pid":2357472,"tid":2925532,"message":"No more data to write on tape, unconditional flushing to the client","drive_name":"G2_F8C4R2","instance":"dev","sched_backend":"cephUser","thread":"TapeWrite","tapeDrive":"G2_F8C4R2","tapeVid":"VR7100","mountId":"29112","vo":"dev_repack","tapePool":"test.cta_test_2copy_copy_1","mediaType":"LTO7M","logicalLibrary":"G2_LTO8_DEV","mountType":"ArchiveForRepack","vendor":"Unknown","capacityInBytes":9000000000000,"files":2,"bytes":4194304000,"flushTime":1.949508}
{"epoch_time":1747835106.086162223,"local_time":"2025-05-21T08:45:06-0500","hostname":"tpsrvg2105","program":"cta-taped","log_level":"INFO","pid":2357472,"tid":2925532,"message":"Logging mount general statistics","drive_name":"G2_F8C4R2","instance":"dev","sched_backend":"cephUser","thread":"TapeWrite","tapeDrive":"G2_F8C4R2","tapeVid":"VR7100","mountId":"29112","vo":"dev_repack","tapePool":"test.cta_test_2copy_copy_1","driveManufacturer":"IBM     ","driveType":"ULT3580-TD8     ","firmwareVersion":"Q3A0","serialNumber":"0007880A1B","mountTotalNonMediumErrorCounts":0}

Even that doesn’t look bad, but when we look at the output of cta-admin repack ls we see no progress: archivedBytes is 0, the failedToArchive file and byte counts are non-zero, and destinationInfos is empty. Listing tape files for the destination tape also returns nothing.
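For reference, this is the kind of check we run over `cta-admin --json repack ls` to spot the contradiction; the field names match what we see in our output (the sample record below is hypothetical, shaped like our real one):

```python
import json

def summarize_repack(json_text):
    """Return the VIDs of repack entries whose catalogue counters
    contradict the drive logs: nothing archived, but failures recorded."""
    suspicious = []
    for entry in json.loads(json_text):
        archived = int(entry.get("archivedBytes", 0))
        failed_files = int(entry.get("failedToArchiveFiles", 0))
        if archived == 0 and failed_files > 0:
            suspicious.append(entry.get("vid", "?"))
    return suspicious

# Hypothetical entry shaped like what we are seeing for the source tape:
sample = ('[{"vid": "VR1871", "archivedBytes": 0,'
          ' "failedToArchiveFiles": 255, "destinationInfos": []}]')
print(summarize_repack(sample))  # ['VR1871']
```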

One thing that is odd, but seems OK to me, is that the tape pool of the source tape is not the same as the tape pool of the destination tape. As far as I can tell this is fine, because the archive routes for the storage class of the files changed in the interim, and the files are correctly routed according to the storage class, the new archive routes, and the new tape pools.

Any ideas of what might be wrong or what to check? We’re trying this in our dev environment first of course.