Migrating from Legacy V3 to V4 “Built on openSUSE”¶
Rockstor v3 was based on CentOS 7 and is no longer supported. Last updates per channel:

- Stable (3.9.2-57): April 2020
- Testing (3.9.1-16): November 2017
All v3 (and likely earlier) Pools, and downloaded Configuration Backup and Restore files, are expected to import and restore respectively just as they would in a v3 install. It is important to import all Pools before uploading and applying a config save file.
As of 3.9.2-56, in common with many NAS systems, the Apple Filing Protocol (AFP) was dropped.
In v4 we auto-label the system pool as “ROOT”, in line with our JeOS upstream. Previous v3 installs used the label “rockstor_rockstor”. V4 also removes the “root” share (subvolume) that inadvertently appeared in v3.
As this migration requires an operating system re-install, it is imperative that you ensure your backups have been refreshed and verified. Greater hardware compatibility is expected given the significantly newer kernel, but there is always the possibility of regressions.
It is also possible, although unlikely, that once you have imported a pool under a newer kernel, it may then fail to import read-write under the older v3-era kernel.
Use a new system disk¶
If at all possible, it is highly advisable to install v4 onto a fresh system disk. This leaves open the possibility of re-attaching the old v3 system disk in case of any difficulties. Do not leave an old v3 system disk attached while installing v4, unless you are re-using that same system disk (entirely possible but not advised). A v4 install is significantly faster and simpler than a v3 install, but will destroy all data on the target disk. Hence the advice here to preserve the original system disk and to detach it prior to the v4 install.
If you do end up reverting to your v3 system disk, for whatever reason, do not have more than one Rockstor system disk, of any version, attached simultaneously. This causes known confusion within the current and past Web-UI. We hope to improve this inflexibility in time.
Disconnect all Data disks¶
As a purely precautionary measure, it is highly advisable to detach all data disks prior to the v4 install. Take careful note of their connections to the host system: this concern relates to potential hardware compatibility of the interconnects. Btrfs, our underlying filesystem and device manager, is not normally concerned with such changes, but the existing hardware pairings, assuming prior function, are best noted nevertheless, ready for the planned re-connection after v4 is installed, updated, and tested successfully. After the last shutdown of v3, while attaching the new system disk, be sure to unplug all prior disks. This avoids accidentally selecting a data Pool member as the target of the fresh install.
As stated above, the v4 install will wipe all prior data on the target disk selected. A simple quick mistake at the initial Select Installation Disk step could destroy a pool member’s data inadvertently.
See Rockstor’s “Built on openSUSE” installer for a step-by-step explanation and guide. And again, take great care at the early Select Installation Disk (intended v4 target system disk) step. If the advice above is followed, there will only be one newly attached candidate system disk anyway.
Once the new install is in place, it is advisable to at least apply all upstream updates. See: Install updates from the Web-UI. Take care to ensure these have all been applied prior to rebooting; the Dashboard can help indicate this via its network and CPU activity widgets. We have an outstanding bug where our “wifi like” busy indicator does not last the full duration of the updates.
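For those comfortable on the command line, the same updates can be applied over SSH with openSUSE’s standard package manager. The transcript below is an illustrative sketch only; the Web-UI route above remains the recommended one:

```
# run as root on the new v4 system
zypper refresh   # refresh repository metadata
zypper up        # apply all pending updates
```

As in the Web-UI case, let these complete fully before rebooting.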
Make sure that the system reboots and returns as expected before re-attaching all prior Pool members, connected as before, and then performing the Pool import and, optionally, a config restore.
V4 Pool/s import¶
V4 btrfs parity raid levels of 5 and 6 are read-only by default. This is an upstream decision and not one enacted by Rockstor. See our Redundancy profiles for more information, and our suggested workaround if need be. See also Import unwell Pool in case your Pool requires special mount options.
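As an illustration, a Pool in this read-only state is simply one mounted with the `ro` option. The string below is an example of the kind of custom mount options discussed in Import unwell Pool; the options appropriate to your Pool may differ:

```
ro,degraded
```

Here `ro` mounts the Pool read-only, and `degraded` additionally tolerates a missing device; only add `degraded` if a device is actually missing.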
V4 Config restore¶
V4 Config restore is as per v3. See: Configuration Backup and Restore.
You must have first downloaded your v3 saved config files, as they otherwise reside only on the (old) system disk.
Although older config save files are compatible, there has been much work done on extending this feature’s capabilities. Earlier config saves cover fewer elements of the system than later ones. E.g. Rock-ons installed, and their associated share settings, are not included in config saves before 3.9.2-52. Note that the Rock-ons restore capability depends upon a non-system-disk The Rock-ons root share location.
Many bug fixes¶
In the process of moving from a CentOS base to a “Built on openSUSE” one, we have found and fixed a large number of bugs, and inherited such things as our Rockstor 4 Installer Recipe that trivially enables highly customised installer creation. We also now have ARM64 (e.g. Pi4 / Ten64) compatibility, barring some Rock-ons, courtesy of openSUSE’s extensive heritage in ARM support.
Also note the following, now we are past the Jump initiative:
In v3, our upstream of CentOS had, in turn, its upstream of Red Hat’s RHEL.
In v4, our upstream of openSUSE has, in turn, increasing binary compatibility with SUSE’s SLES.
So if your prior v3 install had a customisation relying on CentOS/RHEL compatibility, you should now, in v4, look first for an openSUSE equivalent, and then for a SLES SP3 equivalent. This is most likely only going to affect advanced users and is not a concern for primarily Web-UI users.
Users and default group¶
As we have changed our underlying OS between v3 and v4, there are other, more subtle differences that may only come to light in time. One such difference is v4’s default use of the “users” group for newly added users. Our prior CentOS base defaulted to creating an individual group per user, named after the user concerned. The newer default is thought to be better suited to a shared resource, but this difference may come as a surprise to prior v3 administrators.
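For reference, the system-wide default group for new users comes from the distribution’s `useradd` defaults; on openSUSE this is gid 100, the shared “users” group. An illustrative excerpt follows (exact contents may vary by release):

```
# /etc/default/useradd (illustrative excerpt)
GROUP=100        # the shared "users" group
HOME=/home
SHELL=/bin/bash
```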