The following components are needed to use the adapter for Hadoop/Hive:
The location of Java must be specified in an environment variable.
If you are using Linux, add a line to your profile with the location where Java is installed. For example:
export JAVA_HOME=/usr/lib/jvm/jre-1.7.0
If you have the JDK installed:
export JAVA_HOME=/usr/lib/jvm/jdk-1.7.0
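To make the setting persistent and confirm that it resolves, you can append the line to your shell profile and check the Java version. This is a minimal sketch; the profile file name and the installation path are assumptions that depend on your shell and distribution:

echo 'export JAVA_HOME=/usr/lib/jvm/jre-1.7.0' >> ~/.profile
. ~/.profile
$JAVA_HOME/bin/java -version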
If you are using Windows, right-click Computer and click Properties. Then click Advanced System Settings and click Environment Variables. Add the Java bin directories to your PATH variable. For example:
C:\Program Files\Java\jdk7\bin\server;C:\Program Files\Java\jdk7\bin;
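After updating PATH, you can confirm from a newly opened Command Prompt that the expected Java installation is found. A minimal check (the paths above are examples; your installation directory may differ):

where java
java -version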
The location of the Apache Spark jar files must be specified to the server. If the server runs on the same system as the Hadoop and Hive server, you can point to the files where they are installed. If the server runs on a different system, copy the files listed below to a location on that system and specify that location.
This can be done in the system CLASSPATH variable (an example follows the file list below), or in the DataMigrator or WebFOCUS Reporting Server IBI_CLASSPATH variable, as follows:
From the Data Management Console, expand the Workspace folder.
The Java Services Configuration page opens.
In the IBI_CLASSPATH box, enter the full path of each of the files shown below, one file per line.
If you are installing the adapter on a different system from the one where Hadoop and Hive are installed, copy the jar files to a location on that system.
Enter the location on your system that matches your installation.
hive-jdbc-<version>-standalone.jar
hadoop-common.jar
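If you use the system CLASSPATH alternative mentioned above, the equivalent on Linux is to export both files before starting the server. This is a minimal sketch; the /usr/lib/hive/lib and /usr/lib/hadoop directories are assumptions that depend on your distribution, and <version> must be replaced with your installed Hive version:

export CLASSPATH=/usr/lib/hive/lib/hive-jdbc-<version>-standalone.jar:/usr/lib/hadoop/hadoop-common.jar:$CLASSPATH

In the IBI_CLASSPATH box, the same two full paths would be entered instead, one per line.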