I am trying to follow the instructions/steps found here https://gitlab.cern.ch/cta/CTA/-/tree/master/migration so that I can start experimenting with the migration of a CASTOR namespace (on one of our preprod instances) to a test EOS instance running in Docker containers on a VM.
The instructions in the above GitLab repo mention that all these tools need to be installed and operational on all CASTOR nodes. So, I downloaded the whole directory structure from this repo to a CASTOR stager node, unzipped/untarred it and attempted to build the tools by running cmake, which failed with:
CMake Error at gRPC/CMakeLists.txt:19 (find_package):
By not providing "Findxrootdclient.cmake" in CMAKE_MODULE_PATH this project
has asked CMake to find a package configuration file provided by
"xrootdclient", but CMake did not find one.
Hence my question about the package dependencies in gRPC/CMakeLists.txt.
Steve very kindly explained how to get hold of Protobuf3, but what about the first one (xrootdclient)?
So, I was not trying to build the CTA RPMs because I didn't know that I had to do this in the context of the instructions I am trying to follow. Do the CTA RPMs need to be installed on the CASTOR headnodes too? If so, could you please point me to the relevant repo?
I think we can now stop using direct e-mails and start communicating using this forum.
The first thing you need is the CTA RPMs. The current solution is for you to build them. I have started discussions here at CERN to see whether or not we can start distributing binary RPMs. We'll get back to you with our decision.
In the meantime, assuming you have to build the CTA RPMs, you first need to build the source RPM. We have already discussed this in our e-mail exchange, but I am copy-pasting here what I wrote so others can follow if they're interested:
You need to type the following on a CC7 box, preferably NOT a CASTOR production node or even a pre-production node. A box dedicated to development would be better.
sudo yum install -y cmake gcc-c++ git make rpm-build
git clone https://gitlab.cern.ch/cta/CTA.git
mkdir build_srpm
cd build_srpm
cmake -DPackageOnly:Bool=true ../CTA
make cta_srpm
Are you building on vanilla CentOS7 or CERN CentOS7 (CC7)?
I'm a bit confused by the part of your statement that says "and the whole rpm development tree". Does this mean that you have successfully executed the following?
I wanted you to stop at the point where you had successfully created the source RPM. The next step I wanted you to take was indeed to run yum-builddep RPM/SRPMS/cta-0-1.src.rpm and then send me the results/errors. If you are using CC7 then I would expect to see exactly the following errors. Please note that with CC7, the protobuf3 RPMs would be installed with no problems. You encountered problems with the protobuf3 RPMs, and this is exactly why we must both move together at exactly the same pace, so I can see exactly what is wrong.
[root@itctabuild02 build_srpm]# yum-builddep RPM/SRPMS/cta-0-1.src.rpm
Loaded plugins: fastestmirror, kernel-module, ovl, protectbase, versionlock
Enabling base-source repository
Enabling cern-source repository
Enabling epel-source repository
Enabling extras-source repository
Enabling updates-source repository
Loading mirror speeds from cached hostfile
base-source | 2.9 kB 00:00:00
cern-source | 3.4 kB 00:00:00
epel-source | 3.5 kB 00:00:00
extras-source | 3.4 kB 00:00:00
updates-source | 3.4 kB 00:00:00
(1/6): extras-source/7/primary_db | 143 kB 00:00:00
(2/6): cern-source/7/primary_db | 128 kB 00:00:00
(3/6): epel-source/updateinfo | 1.0 MB 00:00:00
(4/6): base-source/7/primary_db | 974 kB 00:00:00
(5/6): updates-source/7/primary_db | 1.8 MB 00:00:00
(6/6): epel-source/primary_db | 2.4 MB 00:00:01
358 packages excluded due to repository protections
Loading mirror speeds from cached hostfile
Getting requirements for cta-0-1.src
--> Already installed : cmake-2.8.12.2-2.el7.x86_64
--> Already installed : redhat-rpm-config-9.1.0-88.el7.centos.noarch
--> 1:xrootd-client-devel-4.12.2-3.el7.x86_64
--> 1:xrootd-devel-4.12.2-3.el7.x86_64
--> 1:xrootd-server-devel-4.12.2-3.el7.x86_64
--> 1:xrootd-private-devel-4.12.2-3.el7.x86_64
--> protobuf3-compiler-3.3.1-2.el7.cern.x86_64
--> protobuf3-devel-3.3.1-2.el7.cern.x86_64
--> gmock-devel-1.6.0-3.el7.noarch
--> gtest-devel-1.6.0-2.el7.x86_64
--> sqlite-devel-3.7.17-8.el7_7.1.x86_64
--> libcap-devel-2.22-11.el7.x86_64
--> binutils-devel-2.27-43.base.el7_8.1.x86_64
--> 1:openssl-devel-1.0.2k-19.el7.x86_64
--> cryptopp-devel-5.6.2-10.el7.x86_64
--> libuuid-devel-2.23.2-63.el7.x86_64
--> json-c-devel-0.11-4.el7_0.x86_64
--> libattr-devel-2.4.46-13.el7.x86_64
--> 1:mariadb-devel-5.5.65-1.el7.x86_64
--> postgresql-devel-9.2.24-4.el7_8.x86_64
--> 1:valgrind-3.15.0-11.el7.x86_64
--> 1:valgrind-devel-3.15.0-11.el7.x86_64
--> Already installed : systemd-219-62.el7.x86_64
Error: No Package found for grpc
Error: No Package found for grpc-devel
Error: No Package found for grpc-plugins
Error: No Package found for grpc-static
Error: No Package found for librados-devel = 2:14.2.8
Error: No Package found for libradosstriper-devel = 2:14.2.8
Error: No Package found for oracle-instantclient19.3-devel
[root@itctabuild02 build_srpm]#
I am running 3.10.0-1127.13.1.el7.x86_64 on an OpenStack VM that I use for XRootD dev.
I ran only the commands you listed in the previous post; I did not run yum-builddep.
Sorry, my wording was inaccurate. By "development tree" I meant that inside the /root/CTA/build_srpm dir the following dirs were created (with whatever content) by cmake:
tmp
RPMS
BUILD
SOURCES
BUILDROOT
SRPMS
SPECS
The OS is Scientific Linux release 7.8 (Nitrogen), and from yum-builddep:
[root@host-172-16-113-181 build_srpm]# yum-builddep ./RPM/SRPMS/cta-0-1.src.rpm
Loaded plugins: langpacks, post-transaction-actions, priorities, versionlock
2907 packages excluded due to repository priority protections
Excluding 42 updates due to versionlock (use "yum versionlock status" to show them)
Getting requirements for cta-0-1.src
--> Already installed : cmake-2.8.12.2-2.el7.x86_64
--> Already installed : redhat-rpm-config-9.1.0-88.sl7.noarch
--> Already installed : 1:xrootd-devel-4.11.10-1.rc2.el7.x86_64
--> gmock-devel-1.6.0-3.el7.noarch
--> gtest-devel-1.6.0-2.el7.x86_64
--> sqlite-devel-3.7.17-8.el7_7.1.x86_64
--> libcap-devel-2.22-11.el7.x86_64
--> binutils-devel-2.27-43.base.el7.x86_64
--> Already installed : 1:openssl-devel-1.0.2k-19.el7.x86_64
--> cryptopp-devel-5.6.2-10.el7.x86_64
--> Already installed : libuuid-devel-2.23.2-63.el7.x86_64
--> Already installed : json-c-devel-0.11-4.el7_0.x86_64
--> libattr-devel-2.4.46-13.el7.x86_64
--> 3:mariadb-devel-10.1.20-2.el7.x86_64
--> postgresql-devel-9.2.24-2.el7.x86_64
--> 1:valgrind-3.15.0-11.el7.x86_64
--> 1:valgrind-devel-3.15.0-11.el7.x86_64
--> Already installed : systemd-219-73.el7.1.x86_64
Error: No Package found for grpc
Error: No Package found for grpc-devel
Error: No Package found for grpc-plugins
Error: No Package found for grpc-static
Error: No Package found for librados-devel = 2:14.2.8
Error: No Package found for libradosstriper-devel = 2:14.2.8
Error: No Package found for oracle-instantclient19.3-devel
Error: No Package found for protobuf3-compiler >= 3.3.1
Error: No Package found for protobuf3-devel >= 3.3.1
Error: No Package found for xrootd-client-devel >= 1:4.10.0
Error: No Package found for xrootd-private-devel >= 1:4.10.0
Error: No Package found for xrootd-server-devel >= 1:4.10.0
Not sure why it can't find librados-devel and libradosstriper-devel; I already have them:
[root@host-172-16-113-181 build_srpm]# rpm -qa | grep librados
librados2-14.2.9-0.el7.x86_64
libradospp-devel-14.2.9-0.el7.x86_64
libradosstriper1-14.2.9-0.el7.x86_64
libradosstriper-devel-14.2.9-0.el7.x86_64
librados-devel-14.2.9-0.el7.x86_64
Best,
George
Wow, you are using SL7! This explains some of the issues.
So for SL7 there is no “default” protobuf3. OK now I know what I’m dealing with. I’m going to call it a day and I’ll get back to you.
I see that on SL7, yum-utils needs to be installed in order to get the yum-builddep command. So the instructions so far are:
yum install -y cmake gcc-c++ git make rpm-build yum-utils
git clone https://gitlab.cern.ch/cta/CTA.git
mkdir build_srpm
cd build_srpm
cmake -DPackageOnly:Bool=true ../CTA
make cta_srpm
yum-builddep RPM/SRPMS/cta-0-1.src.rpm
So now I have the same environment as you:
[root@5f1b4cd84edf build_srpm]# cat /etc/redhat-release
Scientific Linux release 7.8 (Nitrogen)
[root@5f1b4cd84edf build_srpm]#
[root@5f1b4cd84edf build_srpm]# yum-builddep RPM/SRPMS/cta-0-1.src.rpm
Loaded plugins: ovl
Enabling repos-source repository
Enabling sl-source repository
Getting requirements for cta-0-1.src
--> Already installed : cmake-2.8.12.2-2.el7.x86_64
--> Already installed : redhat-rpm-config-9.1.0-88.sl7.noarch
--> sqlite-devel-3.7.17-8.el7_7.1.x86_64
--> libcap-devel-2.22-11.el7.x86_64
--> binutils-devel-2.27-43.base.el7_8.1.x86_64
--> 1:openssl-devel-1.0.2k-19.el7.x86_64
--> libuuid-devel-2.23.2-63.el7.x86_64
--> json-c-devel-0.11-4.el7_0.x86_64
--> libattr-devel-2.4.46-13.el7.x86_64
--> 1:mariadb-devel-5.5.65-1.el7.x86_64
--> postgresql-devel-9.2.24-4.el7_8.x86_64
--> 1:valgrind-3.15.0-11.el7.x86_64
--> 1:valgrind-devel-3.15.0-11.el7.x86_64
--> Already installed : systemd-219-73.el7_8.8.x86_64
Error: No Package found for cryptopp-devel >= 5.6.2
Error: No Package found for gmock-devel >= 1.5.0
Error: No Package found for grpc
Error: No Package found for grpc-devel
Error: No Package found for grpc-plugins
Error: No Package found for grpc-static
Error: No Package found for gtest-devel >= 1.5.0
Error: No Package found for librados-devel = 2:14.2.8
Error: No Package found for libradosstriper-devel = 2:14.2.8
Error: No Package found for oracle-instantclient19.3-devel
Error: No Package found for protobuf3-compiler >= 3.3.1
Error: No Package found for protobuf3-devel >= 3.3.1
Error: No Package found for xrootd-client-devel >= 1:4.10.0
Error: No Package found for xrootd-devel >= 1:4.10.0
Error: No Package found for xrootd-private-devel >= 1:4.10.0
Error: No Package found for xrootd-server-devel >= 1:4.10.0
[root@5f1b4cd84edf build_srpm]#
I had to create another VM (Scientific Linux release 7.7) to get the exact librados/libradosstriper versions that your spec requires (14.2.8-0), because on the other VM I had installed 14.2.9-0 for XRootD dev purposes.
Hi George, glad to see everything is fine so far. Now we have to complete the final step of building the binary RPMs. Please enter the following commands to do so:
Assuming your current directory contains the following sub-directories:
CTA
build_srpm
Please enter the following, taking note that this time we are entering build_rpm and not build_srpm:
mkdir build_rpm
cd build_rpm
cmake ../CTA
make cta_rpm
Building the binary RPMs is going to automatically run some unit tests. In some environments those tests fail due to a missing search line in /etc/resolv.conf. Before going into the details of this, I would first like to see whether your unit tests pass or fail.
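For reference, a minimal /etc/resolv.conf with such a search line could look like the following (the domain and resolver address below are placeholders, not values to copy):

# /etc/resolv.conf
search example.org
nameserver 192.0.2.53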
Fantastic news! So now you can start installing the CTA RPMs as you require them. I am a bit worried that you are going to encounter problems when installing the cta-migration-tools RPM. Keep me posted on any errors you encounter.
For testing, it is sufficient to install the tools on the destination (EOS) headnode.
For the final migration, cta-migration-tools needs to be installed on the CASTOR headnode (nameserver), to do the final step of disabling the tapes in CASTOR.
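For example, assuming the RPMs were built in build_rpm as above (the exact path and version string under RPM/RPMS may differ on your build), installing the migration tools would look something like:

yum localinstall -y build_rpm/RPM/RPMS/x86_64/cta-migration-tools-*.rpm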
I am planning to use our pre-production CASTOR instances to get familiar with the migration procedure as described in the index.md file you sent me.
After each step, an affirmative statement follows if (I think) I understand the step, and a question in case I don't. Apologies in advance for the length of this post! Many thanks!
Install the CTA Catalogue utilities
Install cta-catalogueutils-0-1.el7.x86_64.rpm on one of the CASTOR stagers or the nameserver (since they can connect to the Oracle DB)
Create a file catalogue.conf containing oracle:<username>/<password>@cta
This is done again on the CASTOR stager or nameserver.
What should be the <username>?
I assume that the passwd for the CTA schema is selected by our DBAs?
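To illustrate the format (the credentials here are placeholders; the real ones would come from the DBAs):

echo 'oracle:cta_username/cta_password@cta' > catalogue.conf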
Create the new DB schema for the CTA catalogue:
Looks straightforward
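My understanding is that this boils down to a single command using the utility from the cta-catalogueutils RPM and the catalogue.conf created above (check the utility's --help if the invocation differs in your version):

cta-catalogue-schema-create catalogue.conf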
Create the CTA admin users
What is this step for? Looks like it is related to step 2. above?
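If it helps, a sketch of what this step looks like with the cta-catalogueutils tools (the username is a placeholder and the exact flags are my assumption, so check the command's --help):

cta-catalogue-admin-user-create catalogue.conf --username ctaadmin --comment "CTA admin user"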
Start a CTA Frontend, configured to connect to the same DB as above.
What exactly is a CTA Frontend? A separate node with the cta-frontend RPM installed?
Create the Virtual Organisations:
In my case, I guess I can define a “mock” VO?
Install the PL/SQL procedures
The PL/SQL scripts should be installed from the CTA repo:
OK
Install PL/SQL procedures in CASTOR DB
Question: the castorns_ctamigration_schema.sql and castorvmgr_ctamigration_schema.sql
are installed in the NS and VMGR schemas respectively? If this is the case, in your example
the first 'castor' string is the DbCnvSvc user and the second is the DbCnvSvc dbName,
from the /etc/castor/NSCONFIG containing the line: CERT_NS/*******@certdb
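A sketch of how those two scripts would be loaded with sqlplus, using the NS credentials from our NSCONFIG and assuming a corresponding VMGR account (the passwords and the CERT_VMGR user name are placeholders):

sqlplus CERT_NS/ns_password@certdb @castorns_ctamigration_schema.sql
sqlplus CERT_VMGR/vmgr_password@certdb @castorvmgr_ctamigration_schema.sql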
Install PL/SQL procedures in CTA DB
Install the oracle_catalogue_castor_migration.sql in the CTA schema created in step "Create the new DB schema for the CTA catalogue"
This step is to configure the CTA DB. You can perform it from any machine which can reach the CTA DB. I would suggest doing it from your CTA Frontend.
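As a sketch, with the placeholder credentials from the catalogue.conf example above:

sqlplus cta_username/cta_password@cta @oracle_catalogue_castor_migration.sql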
This allows you to use the cta-admin tool to contact the CTA Frontend to populate the virtualorganization and mediatype tables.
The CTA Frontend is the XRootD server which is used to accept all archival and retrieval requests and administrator commands sent to CTA. The Frontend communicates with the DB and the object store.
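As a quick smoke test once a Frontend is running, something like the following should answer (assuming cta-admin reads the Frontend endpoint from /etc/cta/cta-cli.conf; the host name and port here are placeholders):

echo "cta.endpoint ctafrontend.example.org:10955" > /etc/cta/cta-cli.conf
cta-admin version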
If you don’t know what the Frontend is, I would suggest that you should leave the migration tools to one side and set up a fully running CTA test instance somewhere first. This is a necessary prerequisite before you start the migration.
Yes, the VO name can be whatever you like. It should match the --vo option in the tapepool_castor_to_cta.py script.
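For example, a mock VO could be created along these lines (the flags are my assumption; check cta-admin vo add --help for the exact options in your version):

cta-admin vo add --vo mockvo --readmaxdrives 1 --writemaxdrives 1 --comment "Mock VO for migration testing"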
I suppose so, I am not familiar with your CASTOR configuration.
See documentation in the eos-ns manual page.
The token is a shared secret (password) between the gRPC client (the CTA migration tools) and the server (the EOS MGM).
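One simple way to generate such a token, which then has to be configured identically on both sides (where exactly to put it on the EOS MGM and on the migration-tools side is described in the docs):

openssl rand -base64 32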
For the test migration it is not necessary to install the migration tools on the CASTOR nameserver. I would recommend that you do it from your test EOS headnode.