PostgreSQL Schema Upgrade

Hello friends,

I am looking at upgrading CTA from version 4.0-2. I know there are DB schema upgrades for 4.1, 4.2, 4.3 and so on, so I am looking at that first. I have read "Upgrade procedure" in the EOSCTA Docs and the related sections, and I am wondering: is Liquibase used for PostgreSQL as well as Oracle, or only for Oracle?

I also noticed that the git repo only contains Liquibase change files for Oracle (the PostgreSQL directory is empty):

catalogue/migrations/liquibase/postgres · master · cta / CTA · GitLab (is empty)

catalogue/migrations/liquibase/oracle · master · cta / CTA · GitLab (is not empty)

What version would you recommend we upgrade to? We run this in production.
Does the upgrade process from 4.0 to 4.3 require stepping through each version in turn: a 4.1 schema upgrade, followed by a 4.1 software upgrade, then a 4.2 schema upgrade, a 4.2 software upgrade, and finally a 4.3 schema upgrade followed by a 4.3 software upgrade?

Thank you as always :slight_smile:

Warm Regards,

Denis

Hi Denis

In theory Liquibase can also be used to upgrade the CTA schema in Postgres. As you have no doubt realised, we have not tried it, so I can give you some advice, but you will have to experiment and check that it works as expected.

I would advise that you make a clone of your DB and execute the upgrade on the clone. If everything works, you can execute the upgrade on your production DB.
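If the clone can live in the same Postgres cluster, one quick way to make it is a template copy (assuming the catalogue database is named cta; this is a sketch, and it requires no active connections to the source database while it runs):

```sql
-- Same-cluster clone of the catalogue DB for testing the upgrade.
-- The source database must have no active connections during the copy.
CREATE DATABASE cta_upgrade_test TEMPLATE cta;
```

For a clone on a separate server, a pg_dump/pg_restore round trip works just as well.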

The Oracle Liquibase scripts can be copied and adapted to Postgres. The syntax should be changed to Postgres syntax where necessary; I believe the only change is that VARCHAR2 will need to be replaced with VARCHAR, but please test in case there are other changes needed that I am not aware of. Please submit your upgrade scripts to the repo once you have them working!
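To illustrate the kind of adaptation I mean, here is the VARCHAR2 rewrite applied with sed; the sample line is made up, but in practice you would run this over each file copied from catalogue/migrations/liquibase/oracle/:

```shell
# Rewrite Oracle's VARCHAR2(n) to Postgres VARCHAR(n) on a sample line.
printf 'DISK_FILE_ID VARCHAR2(100) NOT NULL\n' \
  | sed 's/VARCHAR2(\([0-9]*\))/VARCHAR(\1)/g'
# -> DISK_FILE_ID VARCHAR(100) NOT NULL
```

Do diff the results by hand afterwards; there may be other Oracle-isms beyond VARCHAR2.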

Yes, you should run the upgrade of each DB version in turn. You should be able to upgrade to schema v4.6.

The upgrade process is designed to allow upgrading the DB on a live system, but some of the DB upgrades were not completely smooth, so we recommend shutting down the Frontend and tape servers before executing the upgrade if possible.

The CTA software needs to be upgraded after each DB upgrade.

I hope that helps, please come back if you have any more questions or run into a problem.

Cheers,

Michael

Additional notes

There have been quite a few schema changes in the last few versions but this should settle down soon. We are trying to get all necessary DB schema changes in place before the start of Run-3.

The next schema version will be v10.0. We decided to skip a few numbers to remove confusion about whether we are talking about software versions or schema versions.


Hi Michael,

Thank you very much, extremely helpful as always :slight_smile:

I will give this a try in our dev environment in the next week or two. Will probably have a few more questions.

Will definitely save any upgrade scripts and upload them once I am confident in them.

Warm Regards,
Denis

Hello Michael,

I have had a good play with the upgrade process. I have the Postgres scripts ready to post once my access to GitLab is granted.

I also have 3 questions so far:

  1. I deployed CTA version 4.0-2 in dev and created a fresh 4.0 schema using:
cta-catalogue-schema-create /etc/cta/cta-catalogue.conf

Before doing anything else, I ran the schema verify tool, but it fails right off the bat:

cta-catalogue-schema-verify /etc/cta/cta-catalogue.conf
Schema version : 4.0
Checking indexes...
  ERROR: INDEX ARCHIVE_FILE_DFI_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX ARCHIVE_FILE_DIN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX FILE_RECYCLE_LOG_DFI_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX TAPE_FILE_ARCHIVE_FILE_ID_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX TAPE_FILE_VID_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX TAPE_STATE_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX TAPE_TAPE_POOL_ID_IDX is missing in the catalogue database but is defined in the schema.
  FAILED
Checking tables, columns and constraints...
  SUCCESS
Status of the checking : FAILED

This is not the case in our prod environment (which started on version 3.2x and was then upgraded to 4.0-2), so I am not too worried at this stage. Some schema upgrades clear these errors, but they come back with further schema upgrades. What is even stranger is that the indexes it complains are missing are in fact present in the database. For example, INDEX ARCHIVE_FILE_DFI_IDX is visible on the table:

cta=> \d archive_file;
                             Table "cta.archive_file"
       Column        |          Type          | Collation | Nullable |   Default
---------------------+------------------------+-----------+----------+-------------
 archive_file_id     | numeric(20,0)          |           | not null |
 disk_instance_name  | character varying(100) |           | not null |
 disk_file_id        | character varying(100) |           | not null |
 disk_file_uid       | numeric(10,0)          |           | not null |
 disk_file_gid       | numeric(10,0)          |           | not null |
 size_in_bytes       | numeric(20,0)          |           | not null |
 checksum_blob       | bytea                  |           |          |
 checksum_adler32    | numeric(10,0)          |           | not null |
 storage_class_id    | numeric(20,0)          |           | not null |
 creation_time       | numeric(20,0)          |           | not null |
 reconciliation_time | numeric(20,0)          |           | not null |
 is_deleted          | character(1)           |           | not null | '0'::bpchar
 collocation_hint    | character varying(100) |           |          |
Indexes:
    "archive_file_pk" PRIMARY KEY, btree (archive_file_id)
    "archive_file_dfi_idx" btree (disk_file_id)
    "archive_file_din_dfi_un" UNIQUE CONSTRAINT, btree (disk_instance_name, disk_file_id) DEFERRABLE
    "archive_file_din_idx" btree (disk_instance_name)
Check constraints:
    "archive_file_id_bool_ck" CHECK (is_deleted = ANY (ARRAY['0'::bpchar, '1'::bpchar]))
Foreign-key constraints:
    "archive_file_storage_class_fk" FOREIGN KEY (storage_class_id) REFERENCES storage_class(storage_class_id)
Referenced by:
    TABLE "tape_file" CONSTRAINT "tape_file_archive_file_fk" FOREIGN KEY (archive_file_id) REFERENCES archive_file(archive_file_id)

I tried adding the index again, and it fails because the index already exists.
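One way to cross-check is to ask the Postgres catalogue directly; note that Postgres stores unquoted identifiers in lower case, while the verify tool prints the names in upper case, so a naive name comparison could be one source of false positives:

```sql
-- List the indexes Postgres itself reports for the table.
SELECT indexname
FROM pg_indexes
WHERE schemaname = 'cta' AND tablename = 'archive_file';
```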

We are running Postgres server v13. Could it be that the Postgres client libraries in our CTA container are older?

postgresql-libs-9.2.24-7.el7_9.x86_64
  2. I noticed that in the Release Notes, the version of EOS used in CI is updated with some versions of CTA:

cta 4.4.0-1 updates eos used in CI to 4.8.67
cta 4.5.1-2 updates eos used in CI to 4.8.74
cta 4.6.0-1 updates eos used in CI to 4.8.76

We run 4.8.45. Do we also need to upgrade EOS at each stage of the CTA upgrade?

  3. I also noticed in the Release Notes:

cta version 4.6.1-1 requires schema 4.6 (this makes perfect sense)
cta version 4.6.0-1 requires schema 4.6 (I start to get confused here)
cta version 4.4.0-1 requires schema 4.3 (Very confused now)
cta version 4.2-1 requires schema 4.2 (this makes sense)
cta version 4.1-1 requires schema 4.1 (makes sense)

Are schema versions not aligned with software versions?

I was able to confirm that cta version 4.4.1-1 does not recognise schema 4.4, but cta version 4.5.x does.
I was also able to confirm that cta version 4.5.2-1 does not recognise schema 4.5.

My plan originally was:

Update schema to 4.1
Update software to 4.1
Verify 4.1 schema
Update schema to 4.2
Update software to 4.2
Verify 4.2 schema
Update schema to 4.3
Update software to 4.3
Verify 4.3 schema
Update schema to 4.4
Update software to 4.4
Verify 4.4 schema (will fail)
Update schema to 4.5
Update software to 4.5
Verify 4.5 schema (will fail)
Update schema to 4.6
Update software to 4.6
Verify 4.6 schema

What can I do better? Can I just skip 4.4 and 4.5 schema verification?

Thank you :slight_smile:

Warm Regards,

Denis

Hi Denis,

If the only errors are extra indexes, it is safe to proceed with the schema upgrade even though the verify failed. The latest version of cta-catalogue-schema-verify does not fail on extra indexes (it just issues a warning).

Schema versions are not aligned with software versions, although at some point in the past they were. We realised that this is confusing, so we skipped from schema version 4.6 to v10.0 to make it more obvious that they are not related.

ReleaseNotes.md specifies which schema version is required for each software version.

We have made significant improvements to the upgrade procedure for CTA v4.7/schema v10, which is quite a complicated upgrade.

Let me know if that is enough information or if you have further questions!

Michael

Hi Michael,

Thank you very much. That is a great explanation.

I only have one question before I proceed to upgrade our prod: should we also upgrade EOS to 4.8.75 when we are on CTA version 4.6.3-1?

I upgraded eos to 4.8.75 in dev, and it worked fine. It forced me to set

EOS_HA_REDIRECT_READS=1

but otherwise was okay.

If your advice is to upgrade EOS to 4.8.75, could you please explain that configuration item a bit? I looked at the code and it appears in two places:

mgm/XrdMgmOfsFile.cc
mgm/XrdMgmOfsConfigure.cc

Having a look at mgm/XrdMgmOfsConfigure.cc, it looks like it only affects tape-enabled EOS instances. Unfortunately I don't know the codebase well enough to understand its impact in mgm/XrdMgmOfsFile.cc. I wanted to ensure this is also okay to set on our non-tape-enabled EOS instances.

So my upgrade plan currently is:

Upgrade schema to 4.1
Upgrade software to 4.1
Upgrade schema to 4.2
Upgrade software to 4.2 (maybe this is a pointless step?)
Upgrade software to 4.3
Upgrade schema to 4.3
Upgrade software to 4.4
Upgrade schema to 4.4
Upgrade software to 4.5
Upgrade schema to 4.5
Upgrade software to 4.6.0-1
Upgrade eos to 4.8.75 (is this needed?)
Upgrade schema to 4.6
Upgrade software to 4.6.3-1

Warm Regards,

Denis

Sorry Michael,

Please ignore my question about EOS_HA_REDIRECT_READS. I failed to find this topic earlier: MGM seg faults following an EOS upgrade - #13 by georgep - EOS Community

I am just looking to confirm whether EOS should be upgraded to 4.8.75.

Cheers,

Denis

Upgrading EOS is not an essential step in the CTA upgrade. The CI dependency tells you which version of EOS we are testing against.

If you run with a different version of EOS, it will probably work, but caveat emptor: we have not tested it.

EOS 4.8.75 fixed some bugs, so this upgrade is recommended anyway.

In any case I would keep the EOS upgrades separate from the CTA upgrades. You can upgrade CTA first and do the EOS upgrade afterwards as a separate step.

You should now be able to upgrade to CTA 4.7.0-1 with schema v10.0.

Thanks very much Michael. You have clarified everything I wanted to know perfectly.
Is CTA 4.7.0-1 stable enough to run in production?

Warm Regards,
Denis

I recommend upgrading to 4.7.3-1 which is what we are currently running in production. It fixes a bug introduced in 4.6.1-1.

Thank you very much, Michael. I was actually already doing the upgrade in prod when you replied, so we will endeavor to move 4.6 → 4.7 as soon as possible.

The upgrade was successful, thank you very much for your guidance on it.

I have the Postgres scripts ready to upload, except 4.6to10.0 (for when my access to GitLab is enabled).

Warm Regards,

Denis

Great, good to hear it!

Mwai has pointed out a bug in cta-catalogue-schema-verify which he experienced after upgrading to schema v10, see here: Presentations on CTA Schema Upgrade Tools and Procedures - #2 by Mwai

I would be interested to know if you see the same thing.

Best regards,

Michael

Hello Michael,

I just upgraded our dev environment from 4.6.3-1 to 4.7.3-1 and got the exact same issue as Mwai, plus a few more validation errors:

Schema version : 10.0
Checking indexes...
  ERROR: INDEX ADMIN_USER_AUN_UN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX ARCHIVE_FILE_DFI_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX ARCHIVE_FILE_DIN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX ARCHIVE_FILE_SCI_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX DISK_INSTANCE_DIN_UN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX DISK_INSTNCE_SPCE_DISN_UN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX DISK_SYSTEM_DIN_DISN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX DISK_SYSTEM_DSN_UN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX DRIVE_STATE_DN_UN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX FILE_RECYCLE_LOG_DFI_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX FILE_RECYCLE_LOG_SCD_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX FILE_RECYCLE_LOG_VID_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX LOGICAL_LIBRARY_LLN_UN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX MEDIA_TYPE_MTN_UN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX MOUNT_POLICY_MPN_UN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX REQ_ACT_MNT_RULE_DIN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX REQ_ACT_MNT_RULE_MPN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX REQ_GRP_MNT_RULE_DIN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX REQ_GRP_MNT_RULE_MPN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX REQ_MNT_RULE_DIN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX REQ_MNT_RULE_MPN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX STORAGE_CLASS_SCN_UN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX STORAGE_CLASS_VOI_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX TAPE_FILE_ARCHIVE_FILE_ID_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX TAPE_FILE_VID_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX TAPE_LLI_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX TAPE_MTI_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX TAPE_POOL_TPN_UN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX TAPE_POOL_VOI_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX TAPE_STATE_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX TAPE_TAPE_POOL_ID_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX TAPE_VID_UN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX VIRTUAL_ORG_DIN_IDX is missing in the catalogue database but is defined in the schema.
  ERROR: INDEX VIRTUAL_ORG_VON_UN_IDX is missing in the catalogue database but is defined in the schema.
  FAILED
Checking tables, columns and constraints...
  ERROR: IN TABLE DISK_SYSTEM, CONSTRAINT DISK_SYSTEM_DIN_NN is missing in the schema but defined in the catalogue database.
  ERROR: IN TABLE DISK_SYSTEM, CONSTRAINT DISK_SYSTEM_DISN_NN is missing in the schema but defined in the catalogue database.
  ERROR: IN TABLE VIRTUAL_ORGANIZATION, CONSTRAINT VIRTUAL_ORGANIZATION_DIN_NN is missing in the schema but defined in the catalogue database.
  FAILED
Status of the checking : FAILED
  WARNING: Column archive_file.storage_class_id is part of a foreign key constraint but has no index
  WARNING: Column archive_route.storage_class_id is part of a foreign key constraint but has no index
  WARNING: Column archive_route.tape_pool_id is part of a foreign key constraint but has no index
  WARNING: Column disk_instance_space.disk_instance_name is part of a foreign key constraint but has no index

Looking further at the “Checking indexes…” part, these look to be false positives once again. Those indexes do exist in the database, but the schema verify tool is not picking them up. I suspect that when I do this upgrade in production, these false positives will not appear, because the same thing happened many times during the 4.0 → 4.6 upgrade.

The “Checking tables, columns and constraints…” part looks identical to Mwai’s. Having checked the database, those constraints are present, but the tool is not seeing them.

Looking at the last part of the verification output, it’s a mixed bag:

WARNING: Column archive_file.storage_class_id is part of a foreign key constraint but has no index - the index is created in the changelog file, but I could not find it in the database for some reason… I will investigate this further.

WARNING: Column archive_route.storage_class_id is part of a foreign key constraint but has no index - There is definitely no index in the database for this. I don’t see this index created anywhere in the code (schema / upgrade scripts).

WARNING: Column archive_route.tape_pool_id is part of a foreign key constraint but has no index - I am not sure what to make of this. These indexes are present in the database:

    "archive_route_pk" PRIMARY KEY, btree (storage_class_id, copy_nb)
    "archive_route_sci_tpi_un" UNIQUE CONSTRAINT, btree (storage_class_id, tape_pool_id)

WARNING: Column disk_instance_space.disk_instance_name is part of a foreign key constraint but has no index - also not present in the database, but I cannot find where it is supposed to be created in the code.
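To see exactly which columns each existing index covers, I used a read-only query against the Postgres catalogue:

```sql
-- Show every index on archive_route together with its definition.
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'archive_route';
```

Worth noting: Postgres can generally only use a composite index for a lookup on its leading column, and neither archive_route_pk nor archive_route_sci_tpi_un leads with tape_pool_id, so that particular warning may actually be legitimate.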

Overall, aside from these verification errors, I ran 4.7.3-1 through a list of tests and they all passed.

Warm Regards,

Denis

Hi Denis,

When we investigated after Mwai’s report, we discovered that depending on how indexes and constraints are created, PostgreSQL can treat them as named or anonymous. The anonymous ones are not picked up by cta-catalogue-schema-verify. We have opened a ticket for this: Fix `cta-catalogue-schema-verify` checking of NOT NULL constraints in Postgres (#1245) · Issues · cta / CTA · GitLab
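For anyone following along, here is a toy illustration (not our schema) of the difference in Postgres:

```sql
-- Column-level NOT NULL: stored as a column attribute, with no
-- user-visible constraint name for the tool to match against.
CREATE TABLE t_anon (x INT NOT NULL);

-- Named CHECK constraint: recorded in pg_constraint under its own
-- name, which a schema-verification tool can look up.
CREATE TABLE t_named (x INT, CONSTRAINT t_named_x_nn CHECK (x IS NOT NULL));
```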

Cheers

Michael

Hi, Michael,

We’re now trying the upgrade process and got stuck at the step of updating from cta-4.6.0 to cta-4.6.1.

cta-catalogue-schema-verify reports success, but cta-taped fails to start.

$ cta-catalogue-schema-verify /etc/cta/cta-catalogue.conf
Schema version : 4.6
Checking indexes...
  SUCCESS
Checking tables, columns and constraints...
  SUCCESS
Status of the checking : SUCCESS

Part of the cta-taped log:

Jun 13 11:02:13 test-tape01.ihep.ac.cn cta-taped[207116]: LVL="CRIT" PID="207116" TID="207116" MSG="In DriveHandler::runChild(): failed to set drive down" SubprocessName="drive:test01" tapeDrive="test01"
Message="createTapeDrive: executeNonQuery failed: executeNonQuery failed for SQL statement INSERT INTO DRIVE_STATE( DRIVE_NAME, HOST, LOGICAL_LIBRARY, SESSION_ID, BYTES...: Executing non query statement: Database library reported: ERROR:  null value in column 'disk_system_name' of relation 'drive_state' violates not-null constraintDETAIL:  Failing row contains (test01, test-tape01, ts2900, null, null, null, null, null, null, null, null, null, null, 1655089333, null, null, null, null, NO_MOUNT, DOWN, 0, 0, null, null, 4.6.3-1, null, null, null, NO_MOUNT, null, null, null, null, /dev/nst0, smc0, null, null, null, NO_USER, test-tape01, 1655089333, NO_USER, test-tape01, 1655089333, null, null, null). (DB Result Status:7 SQLState:23502)" 
Backtrace="/lib64/libctacommon.so.0(cta::exception::Backtrace::Backtrace(bool)+0x69) [0x7f1f01150ff7] /lib64/libctacommon.so.0(cta::exception::Exception::Exception(std::string const&, bool)+0x89) [0x7f1f0115285b] 
/lib64/libctardbmswrapper.so.0(cta::rdbms::wrapper::Postgres::ThrowInfo(pg_conn const*, pg_result const*, std::string const&)+0x4b3) [0x7f1f0145cd5b] 
/lib64/libctardbmswrapper.so.0(cta::rdbms::wrapper::PostgresStmt::throwDB(pg_result const*, std::string const&)+0x4a) [0x7f1f01468b0a] 
/lib64/libctardbmswrapper.so.0(cta::rdbms::wrapper::PostgresStmt::throwDBIfNotStatus(pg_result const*, ExecStatusType, std::string const&)+0x53) [0x7f1f01468b9b] 
/lib64/libctardbmswrapper.so.0(cta::rdbms::wrapper::PostgresStmt::executeNonQuery()+0x24c) [0x7f1f014665ee] /lib64/libctardbms.so.0(cta::rdbms::Stmt::executeNonQuery()+0x50) [0x7f1f016e17aa] 
/lib64/libctacatalogue.so.0(cta::catalogue::RdbmsCatalogue::createTapeDrive(cta::common::dataStructures::TapeDrive const&)+0xe3) [0x7f1f01b4176f] 
/lib64/libctacatalogue.so.0(cta::catalogue::CatalogueRetryWrapper::createTapeDrive(cta::common::dataStructures::TapeDrive const&)::{lambda()#1}::operator()() const+0x4b) [0x7f1f01a48e65] 
/lib64/libctacatalogue.so.0(std::result_of<cta::catalogue::CatalogueRetryWrapper::createTapeDrive(cta::common::dataStructures::TapeDrive const&)::{lambda()#1} ()>::type cta::catalogue::retryOnLostConnection<cta::catalogue::CatalogueRetryWrapper::createTapeDrive(cta::common::dataStructures::TapeDrive const&)::{lambda()#1}>(cta::log::Logger&, std::result_of const&, unsigned int)+0x5c) [0x7f1f01a80a8c] 
/lib64/libctacatalogue.so.0(cta::catalogue::CatalogueRetryWrapper::createTapeDrive(cta::common::dataStructures::TapeDrive const&)+0x4d) [0x7f1f01a48ec9] 
/lib64/libctacatalogue.so.0(cta::TapeDrivesCatalogueState::createTapeDriveStatus(cta::common::dataStructures::DriveInfo const&, cta::common::dataStructures::DesiredDriveState const&, cta::common::dataStructures::MountType const&, cta::common::dataStructures::DriveStatus const&, cta::tape::daemon::TpconfigLine const&, cta::common::dataStructures::SecurityIdentity const&, cta::log::LogContext&)+0x1bf) [0x7f1f01b9dd2b] 
/lib64/libctascheduler.so.0(cta::Scheduler::createTapeDriveStatus(cta::common::dataStructures::DriveInfo const&, cta::common::dataStructures::DesiredDriveState const&, cta::common::dataStructures::MountType const&, cta::common::dataStructures::DriveStatus const&, cta::tape::daemon::TpconfigLine const&, cta::common::dataStructures::SecurityIdentity const&, cta::log::LogContext&)+0x7e) [0x7f1f03c9fd8a] /usr/bin/cta-taped() [0x47c09e] /usr/bin/cta-taped() [0x49ee10]
/usr/bin/cta-taped() [0x49ddc4] /usr/bin/cta-taped() [0x468533] /usr/bin/cta-taped() [0x46814d] 
/usr/bin/cta-taped() [0x467b38] /usr/bin/cta-taped() [0x4559cf] /usr/bin/cta-taped() [0x4560b5] 
/lib64/libc.so.6(__libc_start_main+0xf5) [0x7f1efac98555] /usr/bin/cta-taped() [0x455669]"

From the log, cta-taped seems to complain that disk_system_name is null, but we can find nowhere to set that drive property. If we drop the constraint, cta-taped starts successfully.

Or should we just upgrade to cta-4.7.3-1?

Hey biyujiang,

If it helps, I have run into a similar problem, except upgrading 4.6.3 → 4.7.3. The upgrade requires you to define a disk instance in your CTA configuration. Otherwise this step fails: catalogue/migrations/liquibase/oracle/4.6to10.0.sql · master · cta / CTA · GitLab

This is how I configured mine to make the upgrade work:

cta-admin di add --name cta --comment "Little EOS"
cta-admin vo ch --vo Shared --di cta

--name cta matches the diskInstance that you see when you list files on a tape. For example:

[root@ctafrontend-0 /]# cta-admin --json tf ls -v A00750  | jq .[0]
{
  "af": {
    "archiveId": "258188",
    "storageClass": "single-copy-backup",
    "creationTime": "1636000452",
    "checksum": [
      {
        "type": "ADLER32",
        "value": "be1efa71"
      }
    ],
    "size": "1048660044"
  },
  "df": {
    "diskId": "566039",
    "diskInstance": "cta", <==== here
    "ownerId": {
      "uid": 48,
      "gid": 48
    },
    "path": ""
  },
  "tf": {
    "vid": "A00750",
    "copyNb": 1,
    "blockId": "0",
    "fSeq": "1"
  }
}

I never upgraded from 4.6.0 to 4.6.1. Instead, I upgraded 4.6.0 to 4.6.3, so maybe that is why I did not run into the same problem when starting cta-taped.

We currently run 4.6.3 in production, and we don’t have a defined disk instance, but cta-taped starts:

cta=> select disk_system_name from drive_state;
 disk_system_name
------------------











(11 rows)

Maybe try going 4.6.0 → 4.6.3, or try defining a disk instance.

Hope this helps :slight_smile:

Warm Regards,

Denis

Hi, Denis,

Thanks for sharing your experience. We tried 4.6.0 → 4.6.1 and 4.6.0 → 4.6.3, but both failed. We did create the disk system and disk instance, but didn’t modify the VO. We’ll try that later.

From the issue Remove deprecated tape drive tables, it seems OK to remove the constraints.

Also, schema v4.6 and v10.0 do remove the constraints on DISK_SYSTEM_NAME and RESERVED_BYTES in the DRIVE_STATE table. So we just removed the two constraints in schema v4.4 and v4.5 and recompiled CTA, and now everything works well.
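For reference, the equivalent change applied directly to an existing Postgres catalogue would be something like this (untested sketch; try it on a cloned DB first):

```sql
-- Drop the NOT NULL constraints that pre-v4.6 schemas still carry
-- on these two DRIVE_STATE columns.
ALTER TABLE DRIVE_STATE ALTER COLUMN DISK_SYSTEM_NAME DROP NOT NULL;
ALTER TABLE DRIVE_STATE ALTER COLUMN RESERVED_BYTES DROP NOT NULL;
```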

Hi biyujiang,

Sorry for the late reply. That makes sense; glad you got it working :slight_smile:

Warm Regards,

Denis