- Solaris 2.8
- AIX 5.1
- Linux RedHat 8.0 (2.4.20-30.8)
- Linux Fedora Core 1 (2.4.22-1.2197.nptl) [our build platform]
- Linux RedHat Enterprise 3 (2.4.21-15.04.ELsmp)

OPUS has also been partially tested on RedHat 9.0. Most blackboard functionality exists, but the Managers may encounter difficulties depending on kernel version. Given the RedHat release climate, unless a strong request for support of RH9.x is made, OPUS may not support it (or 8.0) in the future.
The OPUS Managers have also been tested on these platforms, which include Windows. In addition, plans are being discussed to port the OPUS blackboards to some version of Windows.
In addition, certain types of OPUS task processes can be run on virtually any operating system, provided they communicate with OPUS servers running on one of the operating systems listed above.
The processes that form your pipeline will determine how fast a CPU and how much memory are required. The disk space requirements of OPUS vary depending on the type of installation desired and the size of your pipeline processes and datasets. In general, your system should be capable of running the pipeline processes outside of the OPUS environment to a satisfactory performance level; this will ensure satisfactory performance within the OPUS environment.
Additional requirements:
Additional space is required to run the sample pipeline and to install the Java managers.
In fact, OPUS was specifically designed to allow for distributed processing across several networked computers. The primary requirement is that all computers that are part of the OPUS pipeline be able to see the OPUS configuration files. This implies that there must be common disk space shared among all of the networked computers (an NFS-mounted volume, for example). The one possible exception to this requirement is described here.
In addition, Unix systems must be reachable with a remote-shell facility such as rsh (i.e., the rsh service must be activated in your inet daemon configuration file, and the appropriate rhosts files must be present). For additional security, ssh or another similar package may be used in place of rsh.
However, a user does need to have an account on each machine that is to run OPUS processes. Also, for pipelines employing multiple nodes across a network, all accounts must allow rsh or ssh access. You can test whether or not your accounts are configured to allow rsh or ssh access by attempting to execute a remote command with rsh (substitute ssh for rsh in the following example to test if the secure shell software is installed and configured at your site). For example, suppose you have accounts on two machines, foo and bar, that will run OPUS processes. To test whether you have rsh access you might try the following command from foo:
rsh bar ls

You might have to use a full domain name specification (e.g., bar.yourdomain.edu) if bar is not on your local network.
If successful, you should get a directory listing of your login directory on bar. If you received a "Permission denied" error message, then most likely you do not have a .rhosts file on bar. Refer other problems to your system administrator.
If you are upgrading from a previous version of OPUS, read this first.
The installation script "opus_install" in the install directory will guide you through the rest of the base installation, user account configuration, and installation of the OPUS Java Managers.
Execute the "opus_install" script by entering:
[cd_path]/install/opus_install

where "[cd_path]" is the path to where your CD-ROM is mounted, and follow the on-screen instructions. The opus_install script presents you with the following options:
 --------------------------------------------
 -  Welcome to the OPUS Installation Script! -
 --------------------------------------------

 Please choose from the following items:

 [1] Install OPUS to run from a local disk
 [2] Configure a user account to use OPUS for the first time
 [3] Install the OPUS Managers
 [4] NOTE! about upgrading from previous versions
 [5] Exit

 Selection:

Choose option 1 to begin the installation process.
Should installation fail at some point, it is possible that only a partial installation was done or that one or more of the OPUS files was left incomplete. A list of all files that the installer copies to your system for each installation option is provided in the file named

install_tree_disk.txt

in the install directory of the OPUS CD-ROM. Refer to this file to verify that a complete installation was done if you encounter problems during installation.
/bin/rm -rf /usr/local/opus

This will remove the OPUS distribution from your disk, but it will not remove modifications to individual users' .cshrc files, nor will it remove their definitions, home, or path directories. Individual user directories (e.g., ~/opus_test/) can be removed by each OPUS user in the same manner as the OPUS distribution, using rm. Users can remove OPUS modifications to their .cshrc files by deleting the lines containing "041367OPUS" with a text editor. The install script saves the existing version of your .cshrc file, so you have a backup.
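For users who prefer to do this cleanup from the command line, the following is a minimal sketch. It assumes the user test directory is ~/opus_test and that no lines other than the OPUS-added ones contain the string "041367OPUS"; inspect the result before relying on it.

# Remove the per-user OPUS directories (adjust the path if you chose another location).
/bin/rm -rf ~/opus_test

# Filter the OPUS-added lines out of .cshrc, keeping a backup copy first.
cp ~/.cshrc ~/.cshrc.before_opus_removal
grep -v '041367OPUS' ~/.cshrc.before_opus_removal > ~/.cshrc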
To uninstall the Java managers on Windows platforms, follow the usual procedure to remove programs from your system (the installation process installs an uninstaller for the Java managers). On Unix platforms, run the script Uninstall_XOpus2001 in the UninstallerData subdirectory of the OpusMgrs directory you installed previously.
# (041367OPUS:1) *******************************************************
# (041367OPUS:2) THE FOLLOWING LINES WERE ADDED BY THE OPUS_INSTALL.
# (041367OPUS:3) SCRIPT.  DO NOT MODIFY ANYTHING BETWEEN THE *'S
# (041367OPUS:4)
setenv OTREE /home/mydir/opus/                            # (041367OPUS:5)
set path = ( /cdrom/bin/sparc_solaris /cdrom/com/ $path ) # (041367OPUS:6)
source /home/mydir/opus_test/definitions//opus_login.csh  # (041367OPUS:7)
# (041367OPUS:8) *******************************************************

The install script now also adds a line to source your opus_login.csh file, as shown above. Be certain to source your .cshrc file before you begin OPUS.
Unix systems can also use the rsh or ssh service to initiate processes on all nodes. This is only necessary if you are distributing your pipeline processing across multiple nodes. See the notes in the $OPUS_DEFINITIONS_DIR:opus_corba_objs file. If you will be running your pipeline on a single node (the default), you may skip the rest of this RSH/SSH section.
To use the rsh service, you must create a .rhosts file in the login directory of each account on each node that will run OPUS pipeline processes. This file should contain entries for every node. Refer to the rhosts man page for a detailed description of this file. In general, each .rhosts file should contain two entries, separated by white space, per line. The first entry specifies a host name, the second entry a login name. Each line grants rsh access to your account from the host named in the first entry by the user named in the second entry, bypassing the normal password authentication procedure.
For example, to allow access to your account smith on foo by user smith on bar, the following line should be added to the .rhosts file on foo:
bar smith
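As a fuller illustration, for a pipeline spread across the two hosts foo and bar from the earlier example, each running processes under the account smith, the .rhosts file in smith's login directory on each node might contain:

foo smith
bar smith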
SSH

You might want to use ssh in place of rsh for additional security. The ssh software package is available for many UNIX systems (consult your system administrator as to the availability of ssh at your site). When ssh is used, authorization is handled by ssh's own key mechanism rather than by a .rhosts file. To configure OPUS to use ssh in place of rsh, set the OPUS_REMOTE_SHELL and OPUS_REMOTE_COPY variables in your opus_login.csh to point to ssh and its companion copy utility, and verify that you can execute a remote command on each node without a password prompt, for example "ssh nodename ls". If you continue to receive a prompt during this step, the OPUS servers will not run correctly. See your system administrator for information on how to set up ssh (i.e., how to generate passphrase-less key pairs).

If X11 forwarding causes problems, add "ForwardX11=no" to your ~/.ssh/config file. (On some systems, the correct syntax is "ForwardX11 no".) One symptom of this problem is if you are not able to start processes via the PMG, i.e. if they appear frozen at "starting". Per-node entries in ~/.ssh/config consist of a "Host" line and, where needed, a "User" line.
NOTE: Names of nodes that are to be used for OPUS pipeline processing are initially limited to 20 characters. This limit is set by the amount of space allotted for the node name in the "NODE" field of the process status entry. You can, however, increase the allowable size of the node name field.
The directory structure on the CD-ROM contains the following directories. When you run the installation script to install onto your local disk, these directories will be created:
bin/                 ! executables and scripts
bin/ibm_aix/         ! ...for the AIX platform
bin/linux/           ! ...for the Linux platform
bin/sparc_solaris/   ! ...for the Solaris platform
bin/java             ! Java jar files used by OPUS
bin/JavaMgrs         ! Java manager installers
com/                 ! command procedures or scripts (fxlogin.csh)
dat/                 ! OPUS version information
db/                  ! database files
definitions/         ! OPUS resource files
definitions/unix/    ! OPUS resource files; unix specific
gif/                 ! Input data for the sample pipeline
hlp/                 ! Help files for the applications
html/                ! HTML directory including opusfaq.html
inc/                 ! OAPI header files
install/             ! Installation scripts and tar files
lib/                 ! OAPI and 3rd party libraries
lib/ibm_aix/         ! ...for the AIX platform
lib/sparc_solaris/   ! ...for the Solaris platform
lib/linux/           ! ...for the Linux platform
src/                 ! sample source code (IDL for example programs, etc)

In addition, each user will have his or her own set of directories for local operations. Except for the definitions directory, which can contain files that override the delivered definitions, you should consider the contents of these directories as temporary: they can and should be emptied prior to using the OPUS sample pipeline:
~/opus_test/home/            ! contains process log and status files
~/opus_test/home/lock/       ! contains lock files dynamically created during PSF updates
~/opus_test/definitions/     ! overrides the delivered definitions
~/opus_test/g2f/input/       ! copy the data you want to test here
~/opus_test/g2f/obs/         ! contains the path-specific observation status files
~/opus_test/g2f/obs/lock/    ! contains lock files dynamically created during OSF updates
~/opus_test/g2f/fits/        ! contains the output of the sample pipeline
~/opus_test/quick/input/     ! A second path simulating quick-look data
~/opus_test/quick/error/
~/opus_test/quick/obs/
~/opus_test/quick/obs/lock/
~/opus_test/quick/fits/
~/opus_test/repro/input/     ! A third path simulating reprocessing
~/opus_test/repro/obs/
~/opus_test/repro/obs/lock/
~/opus_test/repro/fits/
For the sample pipeline discussed below, the opus_login.csh file looks something like this:
#!/bin/csh -X
#
#-----------------------------------------------------------------------------
#
#  OPUS login
#
#  This file is a template of opus_login.csh.  Any of these variables can
#  be stretched through your own area.  Also you can add any additional
#  variables to this file for your own application.
#
#  Your customized version of this file should be placed in the directory
#  you define below as opus_definitions_dir.
#
# **************************************************************************
# **************************************************************************
# *********                                                         ********
# *********  Look for the angle brackets <...> below to find where  ********
# *********  to insert parameters for your run-time environment.    ********
# *********                                                         ********
# **************************************************************************
# **************************************************************************
#
#=================== BEGINNING OF USER-DEFINED VARIABLES =====================
#-----------------------------------------------------------------------------
# Define variables for YOUR shell environment.  Examples of every user-
# defined variable precede the actual definitions.  Do NOT copy these
# verbatim.  Use disks and directory trees in YOUR OWN ENVIRONMENT.  Replace
# all angle brackets <...> and their contents with the appropriate values.
#
# setenv SOGS_DISK <"/home/smith/opus/ /usr/local/opus/">
setenv SOGS_DISK "/info/devcl/pipe/heller/sample//opus/"
#
# setenv OPUS_DEFINITIONS_DIR <SOGS_DISK:/definitions/>
setenv OPUS_DEFINITIONS_DIR "/home/heller//opus_test//definitions/ SOGS_DISK:/definitions/unix/ SOGS_DISK:/definitions/"
#
# location for PSTAT files and process log files
# setenv OPUS_HOME_DIR </home/smith/>
setenv OPUS_HOME_DIR /home/heller//opus_test/home//
#
# set the paths to the remote shell utility (rsh-compatible) and remote
# copy utility (rcp-compatible) to be used by OPUS
setenv OPUS_REMOTE_SHELL `which rsh`
setenv OPUS_REMOTE_COPY `which rcp`
#
#====================== END OF USER-DEFINED VARIABLES =========================
#
# location for default X-resource files (pmg and omg)
if ( $?XUSERFILESEARCHPATH ) then
    setenv XUSERFILESEARCHPATH ${XUSERFILESEARCHPATH}:$HOME/%N.dat
else
    setenv XUSERFILESEARCHPATH $HOME/%N.dat
endif
#
# location for shared libraries
if ( $?LD_LIBRARY_PATH ) then
    setenv LD_LIBRARY_PATH /info/devcl/pipe/heller/sample//opus/lib/linux/:$LD_LIBRARY_PATH
else
    setenv LD_LIBRARY_PATH /info/devcl/pipe/heller/sample//opus/lib/linux/
endif
#
set fxlogin = `osfile_stretch_file SOGS_DISK:/com/fxlogin.csh`
source $fxlogin
Option 2 also modifies your .cshrc script in order to add the OPUS executable directories to your default path. Since this path is operating-system dependent, you will need to modify the entries that "opus_install" makes to add support for all versions of Unix you plan to use. In particular, you will need to add one "set path" command per Unix type, and wrap each of these commands in if-else statements so that the correct "set path" command is executed for each operating system. All of this assumes that your login or home directory is the same for each Unix type; if it is not, then you need only copy the additions "opus_install" Option 2 makes to your .cshrc on the host you ran it on to the .cshrc files in your other login directories, with appropriate changes to the bin directory in the "set path" command (see below).
Each line that "opus_install" adds to your .cshrc
file is indexed by
the string "041367OPUS" so that you can easily identify the section
of this file pertaining to OPUS. Within the block of statements
labeled by this key, you will see the "set path" command mentioned
above. For example, if you ran "Option 2" of "opus_install" from a
Solaris workstation, the "set path" command line might look like:
set path = ( /home/me/opus/bin/sparc_solaris/ $path ) # (041367OPUS:6)
The exact directory specification will vary depending on where you installed OPUS, but the important feature at present is the directory "sparc_solaris" under "bin". That is the location of the Solaris OPUS executables. For Linux systems, the directory is "bin/linux" and for AIX, it is "bin/ibm_aix".
You need to isolate this "set path" command so that it is executed only under Solaris, and add other "set path" commands, pointing to the correct bin directory, that are executed only under those operating systems. To accomplish this, duplicate the existing line exactly as many times as necessary for the other operating systems, but replace the directory under "bin" with the other operating system labels (again, "sparc_solaris" for Solaris, "linux" for Linux, and "ibm_aix" for AIX). If you were planning to run under all three supported Unix types, your changes would look like:
set path = ( /home/me/opus/bin/sparc_solaris/ $path ) # (041367OPUS:6) set path = ( /home/me/opus/bin/ibm_aix/ $path ) # (041367OPUS:6) set path = ( /home/me/opus/bin/linux/ $path ) # (041367OPUS:6)
Do not modify the label "(041367OPUS:6)", and be careful with the parentheses in the "set path" command: there must be spaces on both sides of them.
Next, you need to wrap each of these statements in if-else clauses so that only one of them is executed per operating system. To do this, make the following changes:
setenv HOSTTYPE `uname -s`
if ( $HOSTTYPE == SunOS ) then
    set path = ( /home/me/opus/bin/sparc_solaris/ $path ) # (041367OPUS:6)
else if ( $HOSTTYPE == AIX ) then
    set path = ( /home/me/opus/bin/ibm_aix/ $path ) # (041367OPUS:6)
else if ( $HOSTTYPE == Linux ) then
    set path = ( /home/me/opus/bin/linux/ $path ) # (041367OPUS:6)
endif
HOSTTYPE is pre-defined in tcsh, but its values differ from those returned by `uname -s`, so it is overwritten here. For example, the tcsh-defined HOSTTYPE corresponding to sparc_solaris is "sun4", and the tcsh-defined HOSTTYPE corresponding to Linux may be "i686-linux". Again, make sure there are spaces surrounding each of the parentheses.
Finally, you must perform the same customization to your opus_login.csh file for the LD_LIBRARY_PATH environment variable. Add the appropriate if/else tree to set LD_LIBRARY_PATH for each operating system, following the format of the existing entry.
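A sketch of such a tree is shown below, modeled on the HOSTTYPE example above. The installation root /home/me/opus and the helper variable OPUS_LIB_DIR are placeholders introduced only for this sketch; keep the existing $?LD_LIBRARY_PATH test from opus_login.csh.

# OPUS_LIB_DIR is a helper variable used only in this sketch; /home/me/opus
# is a placeholder for your actual installation root.
setenv HOSTTYPE `uname -s`
if ( $HOSTTYPE == SunOS ) then
    setenv OPUS_LIB_DIR /home/me/opus/lib/sparc_solaris/
else if ( $HOSTTYPE == AIX ) then
    setenv OPUS_LIB_DIR /home/me/opus/lib/ibm_aix/
else if ( $HOSTTYPE == Linux ) then
    setenv OPUS_LIB_DIR /home/me/opus/lib/linux/
endif
# Preserve the existing test for an already-set LD_LIBRARY_PATH.
if ( $?LD_LIBRARY_PATH ) then
    setenv LD_LIBRARY_PATH ${OPUS_LIB_DIR}:$LD_LIBRARY_PATH
else
    setenv LD_LIBRARY_PATH $OPUS_LIB_DIR
endif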
The OPUS distribution does not currently include scripts to automatically upgrade your existing version of OPUS. Instead, the following steps must be taken in order to install the latest version of OPUS:
The top-level /bin/ directory on the CD-ROM contains the installer binaries. The OpusMgrs.bin file is for Unix platforms, and the OpusMgrs.exe file is for Windows platforms.
Each binary launches a graphical user interface (GUI) that prompts you for installation information. You may launch this GUI from option 3 of the install script, or you may run it directly from the CD-ROM.
- Solaris 2.8
- Linux RedHat 8.0, Fedora Core 1, RedHat Enterprise 3
- AIX 5.1
- Windows 98, NT, 2000, XP

Because the managers require some Java Native Interface (JNI) code to link with the CORBA servers, the user interface for other platforms has not been developed yet.
% OpusMgrs.bin
This will bring up the InstallAnywhere logo and begin the installation process.
Next you are given a list of the Java VMs (Virtual Machines) that the InstallAnywhere application found on your machine. You should select a version of Java that is at least 1.3; the OPUS Managers do not work with older versions of Java. If you do not yet have a Java Virtual Machine, you can download one freely from the Sun Java site.
Finally, you are asked whether you want a "Typical Install" or a "Full Install". The difference, which applies only to the Windows platform, is that the Full Install includes the dynamic link libraries (DLLs) that the OPUS Managers require to communicate with the CORBA Servers. The first time you install on any machine, you should select the "Full Install". If you later receive upgrades to the OPUS Managers that do not require rebuilds of those object libraries, you can select the "Typical Install" option.
Requirements for SFTP:
SFTP capability does not come standard with Java, but free Java implementations are available from a number of places. The current OPUS Managers support the use of J2SSH ver. 0.2.7, an open-source Java SSH2 package originally developed by 3SP.com (not the "Maverick" product, but the free J2SSH package). It is available for download from SourceForge.net. If the link above does not work, try the SourceForge main page and search for "sshtools" or "j2ssh". Download J2SSH ver. 0.2.7.
Note that OPUS 4.5 supported J2SSH v0.2.2, but the current OPUS supports v0.2.7, and the two versions of J2SSH are incompatible.
To use J2SSH in OPUS, you will need to download and build the J2SSH package, and install 3 jar files (note that some are renamed); a command-line sketch follows this list:

- Copy the j2ssh/dist/lib/j2ssh-core-0.2.7.jar file to [OPUSMgrs-installation-dir]/j2ssh-core.jar
- Copy the j2ssh/lib/commons-logging.jar file to [OPUSMgrs-installation-dir]/commonslog.jar
- Copy the log4j.jar file from the J2SSH distribution to [OPUSMgrs-installation-dir]/log4j.jar
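A minimal sketch of these copies, assuming the J2SSH build tree is in ./j2ssh, the Managers were installed in /usr/local/OpusMgrs, and log4j.jar sits in j2ssh/lib/ (all three are assumptions to adjust for your setup):

# Copy and rename the three jar files into the OPUS Managers installation directory.
cp j2ssh/dist/lib/j2ssh-core-0.2.7.jar /usr/local/OpusMgrs/j2ssh-core.jar
cp j2ssh/lib/commons-logging.jar       /usr/local/OpusMgrs/commonslog.jar
cp j2ssh/lib/log4j.jar                 /usr/local/OpusMgrs/log4j.jar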
If you are not yet using Java 1.4, see these extra requirements for Java 1.3.
All of the above files can, if you wish, simply be installed into the extensions area of your Java distribution instead, in which case the names of the jar files do not matter. To do so, simply drop them into the [java-jre-dir]/lib/ext directory, or see your system administrator.
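For example (a sketch only; [java-jre-dir] is the document's placeholder for your JRE location, and the log4j.jar source path is again an assumption):

# Install the jars into the Java extensions directory instead; no renaming is needed here.
cp j2ssh/dist/lib/j2ssh-core-0.2.7.jar [java-jre-dir]/lib/ext/
cp j2ssh/lib/commons-logging.jar       [java-jre-dir]/lib/ext/
cp j2ssh/lib/log4j.jar                 [java-jre-dir]/lib/ext/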
How to enable SFTP:
Once the necessary jar files have been installed, SFTP can be enabled in OPUS in either of two ways; one is to specify

filexfer.method=sftp

This will enable SFTP instead of FTP for all file transfers for that session.
Debugging Help:
If you see the error

java.lang.NoClassDefFoundError: javax/crypto/NoSuchPaddingException

then you have not yet installed JCE 1.2.2 as an extension to your Java 1.3 distribution.

If you see an error like

ERROR com.sshtools.j2ssh.transport.TransportProtocolCommon - The Transport Protocol thread failed
com.sshtools.j2ssh.transport.AlgorithmNotSupportedException: DH KeyPairGenerator not available

then you have not yet correctly registered the "SunJCE" provider, as described in the JCE 1.2.2 documentation. Use the static method.
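As a sketch of the static registration, add a line like the following to [java-jre-dir]/lib/security/java.security (the slot number 5 is a placeholder; pick the next unused security.provider.N in your own file):

# Register the SunJCE provider statically.
security.provider.5=com.sun.crypto.provider.SunJCE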
On the Unix platforms you will see a link in your home directory to both the PMG task and the OMG task. All you need to do is type those names. See the more complete explanation of the Managers in the following sections.
Please note: you must have started the OPUS Servers before the OPUS Managers can be activated!