Configuring the adapter consists of specifying connection and authentication information for each of the connections you want to establish.
You can configure the adapter from either the Web Console or the Data Management Console.
From the Data Management Console, expand the Adapters folder.
In the DMC, the Adapters folder opens. In the Web Console, the Adapters page opens, showing your configured adapters.
Driver | JDBC Driver Name
---|---
Apache Spark | org.apache.hive.jdbc.HiveDriver
The following image shows an example of the configuration settings used:
The Adapter for Hadoop/Hive is under the SQL group folder.
Logical name used to identify this particular set of connection attributes. The default is CON01.
Is the URL of the data source. The URL depends on the type of server you are connecting to, as shown in the following table.
Server | URL
---|---
Hiveserver2 | jdbc:hive2://server:10000/default
Kerberos Hiveserver2 (static) | jdbc:hive2://server:10000/default;principal=hive/server@REALM.COM
Kerberos Hiveserver2 (user) | jdbc:hive2://server:10000/default;principal=hive/server@REALM.COM;auth=kerberos;kerberosAuthType=fromSubject
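The URL patterns in the table can be assembled from their parts. The following Python sketch is illustrative only (the helper name and defaults are not part of the product); it builds each variant shown above:

```python
# Illustrative helper (not part of the adapter): assemble a HiveServer2
# JDBC URL from its parts. Defaults mirror the examples in the table above.
def hive2_url(host, port=10000, database="default",
              principal=None, from_subject=False):
    url = f"jdbc:hive2://{host}:{port}/{database}"
    if principal:                      # Kerberos-enabled server
        url += f";principal={principal}"
        if from_subject:               # per-user (fromSubject) credentials
            url += ";auth=kerberos;kerberosAuthType=fromSubject"
    return url

print(hive2_url("server"))
print(hive2_url("server", principal="hive/server@REALM.COM"))
print(hive2_url("server", principal="hive/server@REALM.COM", from_subject=True))
```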
where:
server
Is the DNS name or IP address of the system where the Hive server is running. If it is on the same system, localhost can be used.
default
Is the name of the default database to connect to.
10000
Is the default port on which the Spark Thrift Server listens if no port is specified when the Hive server is started.
REALM.COM
For a Kerberos-enabled Hive server, is the name of your Kerberos realm.
Is the name of the JDBC driver, org.apache.hive.jdbc.HiveDriver.
Defines additional Java class directories or full-path jar names that will be available to Java Services. This value can be set by editing the communications file or in the Web Console. In the Web Console, enter one reference per line in the input field. When the file is saved, the entries are converted to a single string delimited by colons (:) on all platforms. OpenVMS platforms must use UNIX-style conventions when setting values (for example, /mydisk/myhome/myclasses.jar rather than mydisk:[myhome]myclasses.jar). When editing the file manually, you must maintain the colon delimiter.
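The one-reference-per-line to colon-delimited conversion described above can be mimicked in a few lines. In this Python sketch, the jar paths are hypothetical examples, not paths the product requires:

```python
# Hypothetical jar references, as they would be entered one per line
# in the Web Console input field.
entries = [
    "/usr/lib/hive/lib/hive-jdbc-standalone.jar",
    "/usr/lib/hadoop/hadoop-common.jar",
]

# On save, the entries become one colon-delimited string
# (colon on all platforms, including OpenVMS with UNIX-style paths).
ibi_classpath = ":".join(entries)
print(ibi_classpath)
```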
There are two methods by which a user can be authenticated when connecting to a database server:
User
Is the primary authorization ID by which you are known to the data source.
Password
Is the password associated with the primary authorization ID.
Select a profile from the drop-down menu to indicate the level of profile in which to store the CONNECTION_ATTRIBUTES command. The global profile, edasprof.prf, is the default.
To create a new profile, either a user profile (user.prf) or, if available on your platform, a group profile (using the appropriate naming convention), choose New Profile from the drop-down menu and enter a name in the Profile Name field (the extension is added automatically).
Store the connection attributes in the server profile (edasprof).
Connections to a Hive server with Kerberos enabled can be run in one of two ways: with a single static Kerberos ticket shared by all connections, or with each user connecting under their own Kerberos credentials.
The static mode is useful for testing, but is not recommended for production deployment.
To set up connections to a Kerberos-enabled Spark Thrift Server instance in which each user has their own connection, the Reporting Server must be secured. The server can be configured with the PTH, LDAP, DBMS, OPSYS, or Custom security providers, as well as with multiple security provider environments.
In this configuration, all connections to the Spark Thrift Server instance will be done with the same Kerberos user ID derived from the Kerberos ticket that is created before the server starts.
kinit kerbid01
where:
kerbid01
Is a Kerberos ID.
jdbc:hive2://server:10000/default;principal=hive/server@REALM.COM
Set to Trusted.
-Djavax.security.auth.useSubjectCredsOnly=false
Once these steps are completed, the adapter can be used to access a Kerberos-enabled Spark Thrift Server instance.
In this configuration, each connected user has a Hive Adapter connection with Kerberos credentials in the user profile.
ENGINE SQLSPK SET ENABLE_KERBEROS ON
Is the URL of the data source.
Server | URL
---|---
Kerberos Hiveserver2 (user) | jdbc:hive2://server:10000/default;principal=hive/server@REALM.COM;auth=kerberos;kerberosAuthType=fromSubject
Set to Explicit.
Enter your Kerberos user ID and password. The server will use those credentials to create a Kerberos ticket and connect to a Kerberos-enabled Spark Thrift Server instance.
Note: The user ID that you use to connect to the server does not have to be the same as the Kerberos ID you use to connect to a Kerberos-enabled Spark Thrift Server instance.
Select your profile or enter a new profile name consisting of the security provider, an underscore and the user ID. For example, ldap01_pgmxxx.
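The profile naming convention above (security provider, underscore, user ID) amounts to a simple string join. This Python sketch is illustrative only; the helper is not part of the product:

```python
def profile_name(provider: str, user_id: str) -> str:
    """Build a per-user profile name: security provider, underscore, user ID."""
    return f"{provider}_{user_id}"

# Matches the example in the text: provider ldap01, user ID pgmxxx.
print(profile_name("ldap01", "pgmxxx"))
```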
If the server is unable to configure the connection, an error message is displayed. An example of the first line in the error message is shown below, where nnnn is the message number returned.
(FOC1400) SQLCODE IS -1 (HEX: FFFFFFFF) XOPEN: nnnn
Some common error messages are:
[00000] JDBFOC>> connectx(): java.lang.UnsupportedClassVersionError: org/apache/hive/jdbc/HiveDriver : Unsupported major.minor version 51.0
The adapter requires Java 1.7 or later (class file version 51.0 corresponds to Java 7), but your JAVA_HOME points to Java 1.6.
(FOC1500) : ERROR: ncjInit failed. failed to connect to java server: JSS (FOC1500) : . JSCOM3 listener may be down - see edaprint.log for details
The server could not find Java. To see where it searched, review edaprint.log, then set JAVA_HOME to the actual Java location and stop and restart the server.
(FOC1260) (-1) [00000] JDBFOC>> connectx(): java.lang.ClassNotFoundException: org.apache.hive.jdbc.HiveDriver
(FOC1260) Check for correct JDBC driver name and environment variables.
(FOC1260) JDBC driver name is org.apache.hive.jdbc.HiveDriver
(FOC1263) THE CURRENT ENVIRONMENT VARIABLES FOR SUFFIX SQLHIV ARE :
(FOC1260) IBI_CLASSPATH : ...
The JDBC driver name specified cannot be found in the jar files specified in IBI_CLASSPATH or CLASSPATH. The names of the jar files are either not specified, or if specified, do not exist in that location.
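One way to verify this cause by hand is to scan each jar on the colon-delimited classpath for the driver's .class entry. The following Python sketch is illustrative only (the helper and the demo jar are not part of the product):

```python
import os
import tempfile
import zipfile

def find_class(classpath: str, class_name: str):
    """Return the first jar on a colon-delimited classpath that contains
    class_name (in dotted form), or None if no jar contains it."""
    entry = class_name.replace(".", "/") + ".class"
    for jar in classpath.split(":"):
        if jar.endswith(".jar") and os.path.isfile(jar):
            with zipfile.ZipFile(jar) as zf:
                if entry in zf.namelist():
                    return jar
    return None

# Demo with a throwaway jar containing only the driver's class entry.
demo_dir = tempfile.mkdtemp()
demo_jar = os.path.join(demo_dir, "hive-jdbc.jar")
with zipfile.ZipFile(demo_jar, "w") as zf:
    zf.writestr("org/apache/hive/jdbc/HiveDriver.class", b"")

found = find_class(demo_jar + ":/nonexistent.jar",
                   "org.apache.hive.jdbc.HiveDriver")
```

If such a check returns None for org.apache.hive.jdbc.HiveDriver against your IBI_CLASSPATH or CLASSPATH value, the jar names are either missing or point to files that do not exist.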
[08S0] Could not establish connection to jdbc:hive2://hostname:10000: java.net.UnknownHostException: hostname
The server hostname could not be reached on the network. Check that the name is spelled correctly and that the system is running. Check that you can ping the server.
[08S01] Could not establish connection to localhost:10000/default: : java.net.ConnectException: Connection refused
The Spark Thrift server is not listening on the specified port. Start the server if it is not running, and check that the port number is correct.
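A quick way to confirm whether anything is accepting connections on the port is a plain TCP check. This Python sketch is illustrative only, not part of the product:

```python
import socket

def port_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If port_listening("localhost", 10000) returns False, that matches the Connection refused symptom: the Thrift server is down, or it is listening on a different port.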