The Phoenix Query Server provides an alternative means of interacting with Phoenix and HBase, one that will eventually enable access from environments other than the JVM.


Phoenix 4.4 introduces a stand-alone server that exposes Phoenix to “thin” clients. It is based on the Avatica component of Apache Calcite. The query server consists of a Java server that manages Phoenix Connections on the clients’ behalf. The client implementation is currently a JDBC driver with minimal dependencies. Two transport mechanisms are currently supported: JSON and Protocol Buffers (available as of Phoenix 4.7, when it became the default). There is also a sqlline script that uses the thin client.

Avatica is still relatively early in its life-cycle. With the introduction of the Protobuf transport, Avatica is moving towards backwards compatibility with the provided thin JDBC driver. There are no such backwards compatibility guarantees for the JSON API.

To repeat: there is no guarantee of backwards compatibility for the JSON transport. Compatibility for the Protobuf transport is stabilizing, although it has not yet been tested thoroughly enough to be called “guaranteed”.


The query server and its JDBC client are part of the standard Phoenix distribution. They require no additional dependencies.



The server component is managed through bin/queryserver.py. Its usage is as follows:

bin/queryserver.py [start|stop]

When invoked with no arguments, the query server is launched in the foreground, with logging directed to the console.

The first argument is an optional start or stop command to the daemon. When either of these is provided, the script takes the appropriate action on the daemon process, if one exists.

Any subsequent arguments are passed to the main class for interpretation.

The server is packaged in a standalone jar, phoenix-server-<version>-runnable.jar. This jar and HBASE_CONF_DIR on the classpath are all that is required to launch the server.


Phoenix provides two mechanisms for interacting with the query server. A JDBC driver is provided in the standalone phoenix-<version>-thin-client.jar. The script bin/sqlline-thin.py is available for the command line.

The JDBC connection string is composed as follows:

jdbc:phoenix:thin:url=<scheme>://<server-hostname>:<port>

<scheme> specifies the transport protocol used when communicating with the server. The only supported transport at this time is http.

<server-hostname> is the name of the host offering the service.

<port> is the port number on which the host is listening. Default is 8765, though this is configurable (see below).

A full list of options that can be provided via the JDBC URL string is available in the table below.
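As a sketch, connecting from Java through the thin JDBC driver might look like the following. The host, port, and table name are purely illustrative; the URL is composed exactly as described above, and the small helper method exists only for this example.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ThinClientExample {

    // Compose a thin-client JDBC URL from its parts, as described above.
    static String buildThinUrl(String scheme, String host, int port) {
        return "jdbc:phoenix:thin:url=" + scheme + "://" + host + ":" + port;
    }

    public static void main(String[] args) throws Exception {
        // Illustrative host and port; 8765 is the query server's default.
        // Running this requires phoenix-<version>-thin-client.jar on the
        // classpath and a query server listening at the given address.
        String url = buildThinUrl("http", "localhost", 8765);
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // "example_table" is a hypothetical table name.
             ResultSet rs = stmt.executeQuery("SELECT id, name FROM example_table")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1) + " " + rs.getString(2));
            }
        }
    }
}
```

Because the thin client is plain JDBC, existing JDBC tooling can use it once the driver jar is on the classpath.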

The script bin/sqlline-thin.py is intended to behave identically to its sibling script bin/sqlline.py. It supports the following usage options:

bin/sqlline-thin.py [[scheme://]host[:port]] [sql_file]

The first optional argument is a connection URL, as described previously. When not provided, scheme defaults to http, host to localhost, and port to 8765.

bin/sqlline-thin.py http://localhost:8765

The second optional parameter is a SQL file from which to read commands.

Wire API documentation

The API itself is documented in the Apache Calcite project, as it is the Avatica API; there is no wire API defined in Phoenix itself.


Protocol Buffer API

For more information on building clients in other languages that work with Avatica, please feel free to reach out to the Apache Calcite dev mailing list.


By default, the Phoenix Query Server executes queries on behalf of the end-user, and HBase permissions are enforced against the end-user’s identity, not the Phoenix Query Server’s. In some cases it may be desirable to execute the query as some other user; this is referred to as “impersonation”. It enables workflows where a trusted user has the privilege to run queries for other users.

This can be enabled by setting the configuration property phoenix.queryserver.withRemoteUserExtractor to true. The URL of the Query Server can then be modified to include the required request parameter. For example, to let “bob” run a query as “alice”, the following JDBC URL could be used:

jdbc:phoenix:thin:url=http://localhost:8765?doAs=alice

The standard Hadoop “proxyuser” configuration keys are checked to validate if the “real” remote user is allowed to impersonate the “doAs” user. See the Hadoop documentation for more information on how to configure these rules.
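Sketching the same URL manipulation programmatically: the snippet below appends the impersonation request parameter to a thin-client URL. It assumes the default parameter name doAs (per phoenix.queryserver.remoteUserExtractor.param) and an illustrative host and port; the helper method exists only for this example.

```java
public class ImpersonationUrlExample {

    // Append the impersonation request parameter to a thin-client URL.
    // "doAs" is the default value of phoenix.queryserver.remoteUserExtractor.param.
    static String withDoAs(String baseUrl, String user) {
        return baseUrl + "?doAs=" + user;
    }

    public static void main(String[] args) {
        // "bob" authenticates to the query server, but the query runs as
        // "alice", provided the Hadoop proxyuser rules allow it.
        String url = withDoAs("jdbc:phoenix:thin:url=http://localhost:8765", "alice");
        System.out.println(url);
    }
}
```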

As a word of warning: there is no end-to-end test coverage for the HBase 0.98 and 1.1 Phoenix release lines because of missing test-related code in those HBase releases. While we expect no issues on these release lines, we recommend that users perform additional testing to verify that none exist.


Server components are spread across a number of Java packages, so effective logging configuration requires updating multiple packages. The default server logging configuration sets the following log levels:

As of this writing, the underlying Avatica component respects the following configuration options, which are exposed via hbase-site.xml.

Configurations relating to the server instantiation.

Property | Description | Default
phoenix.queryserver.http.port | The port on which the server listens. | 8765
phoenix.queryserver.metafactory.class | The Avatica Meta.Factory class to instantiate. | org.apache.phoenix.queryserver.server.PhoenixMetaFactoryImpl
phoenix.queryserver.serialization | The transport/serialization format, either PROTOBUF or JSON. | PROTOBUF
Configurations relating to the server connecting to a secure cluster.

Property | Description | Default
hbase.security.authentication | When set to "kerberos", the server will attempt to log in before initiating Phoenix connections. | specified by hbase-default.xml
phoenix.queryserver.keytab.file | The path to the Kerberos keytab file. | unset
phoenix.queryserver.kerberos.principal | The Kerberos principal to use when authenticating. | unset
phoenix.queryserver.dns.nameserver | The DNS nameserver to consult when resolving hostnames. | default
phoenix.queryserver.dns.interface | The name of the network interface to query for DNS. | default
Configurations relating to the server connection cache.

Property | Description | Default
avatica.connectioncache.concurrency | Connection cache concurrency level. | 10
avatica.connectioncache.initialcapacity | Connection cache initial capacity. | 100
avatica.connectioncache.maxcapacity | Connection cache maximum capacity. Approaching this point, the cache will start to evict the least recently used connection objects. | 1000
avatica.connectioncache.expiryduration | Connection cache expiration duration. Any connections older than this value will be discarded. | 10
avatica.connectioncache.expiryunit | Connection cache expiration unit, applied to the value of avatica.connectioncache.expiryduration. | MINUTES
JDBC URL options.

Property | Description | Default
url | The URL of the QueryServer. | null
serialization | The client-side analog of phoenix.queryserver.serialization on the server, controlling how the client serializes data sent to the QueryServer. | PROTOBUF
timeZone | The time zone the JDBC connection should use. If not provided, the JVM's time zone is used. The value should be parseable by Java's TimeZone class. | null
Configurations relating to the server statement cache.

Property | Description | Default
avatica.statementcache.concurrency | Statement cache concurrency level. | 100
avatica.statementcache.initialcapacity | Statement cache initial capacity. | 1000
avatica.statementcache.maxcapacity | Statement cache maximum capacity. Approaching this point, the cache will start to evict the least recently used statement objects. | 10000
avatica.statementcache.expiryduration | Statement cache expiration duration. Any statements older than this value will be discarded. | 5
avatica.statementcache.expiryunit | Statement cache expiration unit, applied to the value of avatica.statementcache.expiryduration. | MINUTES
Configurations relating to impersonation.

Property | Description | Default
phoenix.queryserver.withRemoteUserExtractor | Boolean controlling whether the user to impersonate is extracted from an HTTP request parameter rather than taken from the HTTP-authenticated user name (the default). | false
phoenix.queryserver.remoteUserExtractor.param | The name of the HTTP request parameter used to extract the user name to execute the query as. | doAs
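Tying several of the tables above together, a server-side hbase-site.xml that overrides a few of these properties might look like the following sketch; the values shown are purely illustrative.

```xml
<configuration>
  <!-- Listen on a non-default port (the default is 8765). -->
  <property>
    <name>phoenix.queryserver.http.port</name>
    <value>8766</value>
  </property>
  <!-- Use the Protobuf transport (the default as of Phoenix 4.7). -->
  <property>
    <name>phoenix.queryserver.serialization</name>
    <value>PROTOBUF</value>
  </property>
  <!-- Allow impersonation via the doAs request parameter. -->
  <property>
    <name>phoenix.queryserver.withRemoteUserExtractor</name>
    <value>true</value>
  </property>
</configuration>
```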

Query Server Additions

The Phoenix Query Server is meant to be horizontally scalable, which makes it a natural fit for add-on features like service discovery and load balancing.

Load balancing

The Query Server can use off-the-shelf HTTP load balancers such as the Apache HTTP Server, nginx, or HAProxy. The primary requirement when using these load balancers is that they support “sticky sessions”: when a client communicates with a backend server, that client continues to talk to the same backend server. The Query Server also provides some bundled functionality for load balancing using ZooKeeper.

The ZooKeeper-based load balancer functions by automatically registering PQS instances in ZooKeeper and allowing clients to query the list of available servers. Unlike the others mentioned above, this implementation requires that the client use the advertised information to make a routing decision. In this regard, the ZooKeeper-based approach is more akin to a service-discovery layer than to a traditional load balancer. This load balancer implementation does not support SASL-based (Kerberos) ACLs in ZooKeeper (see PHOENIX-4085).

The following are configuration properties used to configure this load balancer:

Configurations relating to the ZooKeeper-based load balancer.

Property | Description | Default
phoenix.queryserver.loadbalancer.enabled | Whether PQS should register itself in ZooKeeper for the load balancer. | false
phoenix.queryserver.base.path | Root znode under which the PQS instance registers itself. | /phoenix
phoenix.queryserver.service.name | A unique name to identify this PQS instance from others. | queryserver
phoenix.queryserver.zookeeper.acl.username | Name to set for a DIGEST ZooKeeper ACL (optional). | phoenix
phoenix.queryserver.zookeeper.acl.password | Password to set for a DIGEST ZooKeeper ACL (optional). | phoenix
