Updating to OSG 24¶
OSG 24 (the new series) introduces support for the ARM architecture. Changes required to upgrade from OSG 23 are relatively minor.
- Compute Entrypoints should be updated to OSG 24 as soon as possible.
- HTCondor pools and access points should be updated to OSG 24 as soon as possible.
- All other services (e.g., OSG Worker Node clients, Frontier Squids) should be updated to OSG 24 as soon as possible.
Updating the OSG Repositories¶
1. Prerequisites: consult the access point, compute entrypoint, and/or HTCondor hosts upgrade notes before updating the OSG repositories.

2. Clean the yum cache:

   root@host # yum clean all --enablerepo=*

3. Remove the old series Yum repositories:

   root@host # yum erase osg-release

   This step ensures that any local modifications to `*.repo` files will not prevent installing the new series repositories. Any modified `*.repo` files should appear under `/etc/yum.repos.d/` with the `*.rpmsave` extension. After installing the new OSG repositories (the next step), you may want to apply any changes made in the `*.rpmsave` files to the new `*.repo` files.
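As a sketch of that review step, the following loop diffs each preserved `*.rpmsave` repo file against its newly installed counterpart. The demo runs on a throwaway directory; pass `/etc/yum.repos.d` to inspect the live files.

```shell
#!/bin/sh
# Sketch: review local edits preserved in *.rpmsave repo files by diffing
# them against the freshly installed *.repo defaults.
show_repo_saves() {
    repo_dir="$1"                        # normally /etc/yum.repos.d
    for saved in "$repo_dir"/*.rpmsave; do
        [ -e "$saved" ] || continue      # glob matched nothing
        new="${saved%.rpmsave}"          # e.g. osg.repo.rpmsave -> osg.repo
        echo "== local edits preserved in $saved =="
        if [ -e "$new" ]; then
            diff -u "$new" "$saved"      # non-zero exit just means "differs"
        else
            echo "   (no matching ${new##*/} installed)"
        fi
    done
    return 0
}

# Demo on a throwaway directory rather than the live /etc/yum.repos.d:
demo=$(mktemp -d)
printf 'enabled=1\n' > "$demo/osg.repo"
printf 'enabled=0\n' > "$demo/osg.repo.rpmsave"
show_repo_saves "$demo"
rm -rf "$demo"
```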
4. Update your Yum repositories to OSG 24.
5. Update software:

   Note: because configuration updates will be necessary, be sure to turn off any OSG services before updating them. Consult the sections below that match your situation.

   root@host # yum update

   Warning:

   - Please be aware that running `yum update` may also update other RPMs. You can exclude packages from being updated using the `--exclude=[package-name or glob]` option for the `yum` command.
   - Watch the yum update carefully for any messages about a `.rpmnew` file being created. That means that a configuration file has been edited and a new default version is available. In that case, RPM does not overwrite the edited configuration file; instead, it installs the new version with a `.rpmnew` extension. You will need to merge any edits that have been made into the `.rpmnew` file and then move the merged version into place (that is, without the `.rpmnew` extension).

6. Continue on to any update instructions that match the role(s) that the host performs.
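The `.rpmnew` review-and-merge described in the warning can be sketched as follows; the demo operates on a scratch directory, and the final `mv` is left commented out because you must merge your edits by hand first.

```shell
#!/bin/sh
# Sketch: locate *.rpmnew files under a directory and show the edits that
# need merging. After merging by hand, move the .rpmnew file into place.
list_rpmnew() {
    etc_dir="$1"                                 # normally /etc
    find "$etc_dir" -name '*.rpmnew' | while read -r newfile; do
        current="${newfile%.rpmnew}"
        echo "== new default for $current: review and merge =="
        [ -e "$current" ] && diff -u "$current" "$newfile"
        # mv "$newfile" "$current"               # only after merging your edits!
    done
    return 0
}

# Demo on a scratch directory, not the live /etc:
demo=$(mktemp -d)
printf 'old-setting\n' > "$demo/app.conf"
printf 'new-setting\n' > "$demo/app.conf.rpmnew"
list_rpmnew "$demo"
rm -rf "$demo"
```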
Updating Container-based OSPool EP Deployments¶
In OSG 24, the `opensciencegrid/osgvo-docker-pilot` worker node Docker image has been renamed to `osg-htc/ospool-ep`.
To upgrade your docker-based worker nodes from OSG 23 to OSG 24, follow the sections below:
Via Docker Run¶
For sites running the container directly via docker, the EP container can be updated by changing the image name referenced in the `docker run` command. All other arguments to the `docker run` command may remain the same.
root@host # docker run <existing docker args> hub.opensciencegrid.org/osg-htc/ospool-ep:24-release
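As an illustration, the change amounts to swapping only the final image reference; the `-d` and `--name` arguments below are hypothetical stand-ins for whatever arguments your site's `docker run` command already uses.

```shell
#!/bin/sh
# Sketch: only the trailing image reference changes between OSG 23 and
# OSG 24. "-d" and "--name ospool-ep" are hypothetical placeholders for
# your site's existing docker run arguments, which stay the same.
NEW_IMAGE="hub.opensciencegrid.org/osg-htc/ospool-ep:24-release"
set -- docker run -d --name ospool-ep "$NEW_IMAGE"
echo "Would run: $*"   # replace the echo with "$@" to actually launch it
```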
Via RPM¶
For sites running the EP container via RPM installation, the container can be upgraded by updating the RPM:

1. Install the OSG 24 Yum repositories.

2. Upgrade the `ospool-ep` RPM:

   root@host # yum install ospool-ep

3. (Optional) Clean up `/etc/osg/ospool-ep.cfg`: a bug in the OSG 23 release of `ospool-ep` required users to add a `WORK_TEMP_DIR` configuration field as a copy of the default `WORKER_TEMP_DIR`. When upgrading to OSG 24, remove the duplicated `WORK_TEMP_DIR` field.

4. Restart the `ospool-ep` systemd service:

   root@host # systemctl restart ospool-ep
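The optional `WORK_TEMP_DIR` cleanup can be sketched with `sed`. The sample file contents below are illustrative, not the full configuration; point `CFG` at `/etc/osg/ospool-ep.cfg` (after taking a backup) to apply it for real.

```shell
#!/bin/sh
# Sketch: strip the duplicated WORK_TEMP_DIR line left over from the
# OSG 23 workaround. This runs on a scratch copy; the two sample lines
# below are illustrative only.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
WORKER_TEMP_DIR = /tmp/ospool-ep
WORK_TEMP_DIR = /tmp/ospool-ep
EOF

# Delete only WORK_TEMP_DIR; WORKER_TEMP_DIR does not match this pattern.
sed -i '/^WORK_TEMP_DIR[[:space:]]*=/d' "$CFG"
result=$(cat "$CFG")
echo "$result"
rm -f "$CFG"
```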
Updating Your OSG Access Point¶
In OSG 24, some manual configuration changes may be required for OSG Access Points (APs).
HTCondor¶
Consult the HTCondor upgrade section for details on updating your HTCondor configuration.
Restarting HTCondor¶
After updating your RPMs, restart your HTCondor service:
root@host # systemctl restart condor
Updating Your OSG Compute Entrypoint¶
The OSG 24 release series contains HTCondor-CE 24. HTCondor-CE 24 no longer accepts the original job router syntax. If you have custom job routes, you must use the new, more flexible, ClassAd transform job router syntax.
To upgrade your CE to OSG 24, follow the sections below.
Check for possible incompatibilities¶
1. Ensure that you have the latest HTCondor installed (at least HTCondor 23.10.2 or HTCondor 23.0.17).

2. Run the `condor_ce_upgrade_check` script and address any issues found.

3. If you have added custom job routes, make sure that you convert any job routes to the new, more flexible, ClassAd transform syntax.

4. If you have an HTCondor batch system, also run the `condor_upgrade_check` script and address any issues found.

5. Also consult the upgrade documentation for more information.
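For reference, a minimal route in the ClassAd transform syntax looks like the following sketch; the route name and the attribute being set are hypothetical and should be adapted from your existing routes.

```
# Hypothetical route "My_Pool" in ClassAd transform syntax
JOB_ROUTER_ROUTE_My_Pool @=jrt
   # Submit routed jobs to the local pool in the vanilla universe
   UNIVERSE VANILLA
   # Transform commands such as SET edit attributes of the routed job
   SET My_Attribute "example"
@jrt

JOB_ROUTER_ROUTE_NAMES = $(JOB_ROUTER_ROUTE_NAMES) My_Pool
```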
Turning off CE services¶
1. Register a downtime.

2. Before the update, turn off the following services on your HTCondor-CE host:

   root@host # systemctl stop condor-ce
Updating CE packages¶
For OSG CEs serving an HTCondor pool
If your OSG CE routes pilot jobs to a local HTCondor pool, also see the section for updating your HTCondor hosts.
After turning off your CE's services, you may proceed with the repository and RPM update process.
Starting CE services¶
After updating your RPMs and updating your configuration, turn on the HTCondor-CE service:
root@host # systemctl start condor-ce
Updating Your HTCondor Hosts¶
HTCondor-CE hosts
Consult this section before updating the `condor` package on your HTCondor-CE hosts.
If you are running an HTCondor pool, consult the following instructions to update to HTCondor from OSG 23.
1. Ensure that you have the latest HTCondor installed (at least HTCondor 23.10.2 or HTCondor 23.0.17).

2. Run the `condor_upgrade_check` script and address any issues found.

3. Also consult the HTCondor 24.0 upgrade instructions.
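As a quick sanity check before upgrading, a comparison like the sketch below (using GNU `sort -V`, fed with the version reported by `condor_version`) can confirm the minimum is met. Treating 23.0.x as the LTS series and everything else against 23.10.2 is an interpretation of the requirement above.

```shell
#!/bin/sh
# Sketch: check an HTCondor version string against the minimum required
# for the upgrade (23.10.2 feature series, 23.0.17 LTS series).
# In practice, feed it the version from: condor_version | head -1
version_ge() {
    # True when $1 >= $2 in version order (relies on GNU sort -V)
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

meets_minimum() {
    ver="$1"
    case "$ver" in
        23.0.*) version_ge "$ver" 23.0.17 ;;   # LTS series
        *)      version_ge "$ver" 23.10.2 ;;   # feature series and newer
    esac
}

meets_minimum 23.0.17 && echo "23.0.17 ok"
meets_minimum 23.0.16 || echo "23.0.16 too old"
meets_minimum 24.0.1  && echo "24.0.1 ok"
```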
You may proceed with the repository and RPM update process.
Getting Help¶
To get assistance, please use this page.