Secondary indexes are an orthogonal way to access data from its primary access path. In HBase, you have a single index that is lexicographically sorted on the primary row key. Access to records in any way other than through the primary row requires scanning over potentially all the rows in the table to test them against your filter. With secondary indexing, the columns or expressions you index form an alternate row key to allow point lookups and range scans along this new axis.

Covered Indexes

Phoenix is particularly powerful in that we provide covered indexes - we do not need to go back to the primary table once we have found the index entry. Instead, we bundle the data we care about right in the index rows, saving read-time overhead.

For example, the following would create an index on the v1 and v2 columns and include the v3 column in the index as well to prevent having to get it from the data table:

CREATE INDEX my_index ON my_table (v1,v2) INCLUDE(v3)

Functional Indexes

Functional indexes (available in 4.3 and above) allow you to create an index not just on columns, but on arbitrary expressions. Then, when a query uses that expression, the index may be used to retrieve the results instead of the data table. For example, you could create an index on UPPER(FIRST_NAME||' '||LAST_NAME) to allow case-insensitive searches on the combined first name and last name of a person.

For example, the following would create this functional index (the EMP table and index name here are illustrative):
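
CREATE INDEX UPPER_NAME_IDX ON EMP (UPPER(FIRST_NAME||' '||LAST_NAME))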


With this index in place, when a query such as the following is issued (assuming EMP_ID is the table's primary key and therefore automatically part of the index), the index would be used instead of the data table to retrieve the results:
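
SELECT EMP_ID FROM EMP WHERE UPPER(FIRST_NAME||' '||LAST_NAME)='JOHN DOE'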


Phoenix supports two types of indexing: global and local. Each is useful in different scenarios and has its own failure profile and performance characteristics.

Global Indexes

Global indexing targets read-heavy use cases. With global indexes, all the performance penalties for indexes occur at write time. We intercept the data table updates on write (DELETE, UPSERT VALUES, and UPSERT SELECT), build the index updates, and then send any necessary updates to all interested index tables. At read time, Phoenix will select the index table that will produce the fastest query time and directly scan it just like any other HBase table. By default, unless hinted, an index will not be used for a query that references a column that isn't part of the index.

Local Indexes

Local indexing targets write-heavy, space-constrained use cases. Just like with global indexes, Phoenix will automatically decide whether or not to use a local index at query time. With local indexes, index data and table data co-reside on the same server, preventing any network overhead during writes. Local indexes can be used even when the query isn't fully covered (i.e. Phoenix automatically retrieves the columns not in the index through point gets against the data table). Unlike global indexes, prior to version 4.8.0 all local indexes of a table are stored in a single, separate shared table. From 4.8.0 onwards, all local index data is stored in separate shadow column families within the data table itself. At read time, when the local index is used, every region must be examined for the data, as the exact region location of index data cannot be predetermined; thus some overhead is incurred at read time.

Index Population

By default, when an index is created, it is populated synchronously during the CREATE INDEX call. This may not be feasible depending on the current size of the data table. As of 4.5, initial population of an index may be done asynchronously by including the ASYNC keyword in the index creation DDL statement:

CREATE INDEX async_index ON my_schema.my_table (v) ASYNC

The MapReduce job that populates the index table must be kicked off separately through the HBase command line, like this:

${HBASE_HOME}/bin/hbase org.apache.phoenix.mapreduce.index.IndexTool \
  --schema MY_SCHEMA --data-table MY_TABLE --index-table ASYNC_IDX \
  --output-path ASYNC_IDX_HFILES

Only when the MapReduce job is complete will the index be activated and start to be used in queries. The job is resilient to the client exiting. The output-path option specifies the HDFS directory to which the HFiles are written.

Index Usage

Indexes are automatically used by Phoenix to service a query when it’s determined more efficient to do so. However, a global index will not be used unless all of the columns referenced in the query are contained in the index. For example, the following query would not use the index, because v2 is referenced in the query but not included in the index:

SELECT v2 FROM my_table WHERE v1 = 'foo'

There are three means of getting an index to be used in this case:

  1. Create a covered index by including v2 in the index:

    CREATE INDEX my_index ON my_table (v1) INCLUDE (v2)

    This will cause the v2 column value to be copied into the index and kept in sync as it changes. This will obviously increase the size of the index.

  2. Hint the query to force it to use the index:

    SELECT /*+ INDEX(my_table my_index) */ v2 FROM my_table WHERE v1 = 'foo'

    This will cause each data row to be retrieved when the index is traversed to find the missing v2 column value. This hint should only be used if you know that the index has good selectivity (i.e. a small number of table rows have a value of 'foo' in this example), as otherwise you'll get better performance from the default behavior of a full table scan.

  3. Create a local index:

    CREATE LOCAL INDEX my_index ON my_table (v1)

    Unlike global indexes, a local index will be used even when not all of the columns referenced in the query are contained in the index. This is done by default for local indexes because we know that the table and index data co-reside on the same region server, ensuring the lookup is local.

Index Removal

To drop an index, you’d issue the following statement:

DROP INDEX my_index ON my_table

If an indexed column is dropped in the data table, the index will automatically be dropped. In addition, if a covered column is dropped in the data table, it will be automatically dropped from the index as well.

Index Properties

Just like with the CREATE TABLE statement, the CREATE INDEX statement may pass through properties to apply to the underlying HBase table, including the ability to salt it:

CREATE INDEX my_index ON my_table (v2 DESC, v1) INCLUDE (v3)
  SALT_BUCKETS=10, DATA_BLOCK_ENCODING='NONE'

Note that if the primary table is salted, then for global indexes the index is automatically salted in the same way. In addition, the MAX_FILESIZE for the index is adjusted down, relative to the size of the primary table versus the index table. For more on salting, see the Salted Tables documentation. With local indexes, on the other hand, specifying SALT_BUCKETS is not allowed.

Consistency Guarantees

On successful return to the client after a commit, all data is guaranteed to be written to all interested indexes and the primary table. In other words, index updates are synchronous with the same strong consistency guarantees provided by HBase.

However, since indexes are stored in tables separate from the data table, and depending on the properties of the table and the type of index, the consistency between your table and index varies in the event that a commit fails due to a server-side crash. This is an important design consideration, driven by your requirements and use case.

Outlined below are the different options with various levels of consistency guarantees.

Transactional Tables

By declaring your table as transactional, you achieve the highest level of consistency guarantee between your table and index. In this case, the commit of your table mutations and the related index updates is atomic, with strong ACID guarantees. If the commit fails, then none of your data (table or index) is updated, ensuring that your table and index are always in sync.
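
For example, assuming transactions have been enabled on your cluster and a transaction manager is running (see the Phoenix transactions documentation), a table could be declared transactional in its DDL like this (the schema shown is illustrative):

CREATE TABLE my_table (k VARCHAR PRIMARY KEY, v1 VARCHAR) TRANSACTIONAL=true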

Why not just always declare your tables as transactional? This may be fine, especially if your table is declared as immutable, since the transactional overhead is very small in this case. However, if your data is mutable, make sure that the overhead associated with the conflict detection performed for transactional tables, and the operational overhead of running the transaction manager, are acceptable. Additionally, transactional tables with secondary indexes potentially lower the availability of your data table for writes, as both the data table and its secondary index tables must be available or the write will fail.

Immutable Tables

For a table in which the data is only written once and never updated in-place, certain optimizations may be made to reduce the write-time overhead of incremental index maintenance. This is common with time-series data such as log or event data, where once a row is written, it will never be updated. To take advantage of these optimizations, declare your table as immutable by adding the IMMUTABLE_ROWS=true property to your DDL statement, for example (the column definitions below are illustrative):
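
CREATE TABLE my_table (k VARCHAR PRIMARY KEY, v1 VARCHAR) IMMUTABLE_ROWS=true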


All indexes on a table declared with IMMUTABLE_ROWS=true are considered immutable (note that by default, tables are considered mutable). For global immutable indexes, the index is maintained entirely on the client side, with the index table being generated as changes to the data table occur. Local immutable indexes, on the other hand, are maintained on the server side. Note that no safeguards are in place to enforce that a table declared as immutable doesn't actually mutate data (as that would negate the performance gain achieved). If that were to occur, the index would no longer be in sync with the table.

If you have an existing table that you'd like to switch from immutable indexing to mutable indexing, use the ALTER TABLE command as shown below:
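
ALTER TABLE my_table SET IMMUTABLE_ROWS=false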


Indexes on non-transactional, immutable tables have no mechanism in place to automatically deal with a commit failure. Maintaining consistency between the table and index is left to the client to handle. Because the updates are idempotent, the simplest solution is for the client to continue retrying the batch of mutations until it succeeds.

Mutable Tables

For non-transactional mutable tables, we maintain index update durability by adding the index updates to the Write-Ahead-Log (WAL) entry of the primary table row. Only after the WAL entry is successfully synced to disk do we attempt to make the index/primary table updates. We write the index updates in parallel by default, leading to very high throughput. If the server crashes while we are writing the index updates, we replay all the index updates to the index tables in the WAL recovery process and rely on the idempotence of the updates to ensure correctness. Therefore, indexes on non-transactional mutable tables are only ever a single batch of edits behind the primary table.

It’s important to note several points:

  • For non transactional tables, you could see the index table out of sync with the primary table.
  • As noted above, this is acceptable because the index is only a very small amount behind, and only out of sync for very short periods.
  • Each data row and its index row(s) are guaranteed to either be written or lost together - we never see partial updates, as this is part of the atomicity guarantees of HBase.
  • Data is first written to the table followed by the index tables (the reverse is true if the WAL is disabled).

Singular Write Path

There is a single write path that guarantees the failure properties. All writes to the HRegion get intercepted by our coprocessor. We then build the index updates based on the pending update (or updates, in the case of a batch). These updates are then appended to the WAL entry for the original update.

If we get any failure up to this point, we return the failure to the client and no data is persisted or made visible to the client.

Once the WAL is written, we ensure that the index and primary table data will become visible, even in the case of a failure.

  • If the server does crash, we then replay the index updates with the usual WAL replay mechanism
  • If the server does not crash, we just insert the index updates to their respective tables.
    • If the index updates fail, the various means of maintaining consistency are outlined below.
    • If the Phoenix system catalog table cannot be reached when a failure occurs, we force the server to be immediately aborted and, failing that, call System.exit on the JVM, forcing the server to die. By killing the server, we ensure that the WAL will be replayed on recovery, replaying the index updates to their appropriate tables. This ensures that a secondary index does not continue to be used when it is in a known, invalid state.

Disallow table writes until mutable index is consistent

The highest level of consistency between your non-transactional table and its index is achieved by declaring that writes to the data table should be temporarily disallowed in the event of a failure to update the index. In this consistency mode, the table and index will be held at the timestamp before the failure occurred, with writes to the data table being disallowed until the index is back online and in sync with the data table. The index will remain active and continue to be used by queries as usual.

The following server-side configurations control this behavior (a sample hbase-site.xml snippet follows the list):

  • phoenix.index.failure.block.write must be true to cause writes to the data table to fail in the event of a commit failure, until the index is caught up with the data table.
  • phoenix.index.failure.handling.rebuild must be true (the default) to enable a mutable index to be rebuilt in the background in the event of a commit failure.
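
As a sketch, the two properties above would be set in the server-side hbase-site.xml like any other HBase configuration, for example:

<property>
  <name>phoenix.index.failure.block.write</name>
  <value>true</value>
</property>
<property>
  <name>phoenix.index.failure.handling.rebuild</name>
  <value>true</value>
</property>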

Disable mutable indexes on write failure until consistency restored

The default behavior with mutable indexes is to mark the index as disabled if a write to them fails at commit time, partially rebuild them in the background, and then mark them as active again once consistency is restored. In this consistency mode, writes to the data table will not be blocked while the secondary index is being rebuilt. However, the secondary index will not be used by queries while the rebuild is happening.

The following server-side configurations control this behavior:

  • phoenix.index.failure.handling.rebuild must be true (the default) to enable a mutable index to be rebuilt in the background in the event of a commit failure.
  • phoenix.index.failure.handling.rebuild.interval controls the millisecond frequency at which the server checks whether or not a mutable index needs to be partially rebuilt to catch up with updates to the data table. The default is 10000 or 10 seconds.
  • phoenix.index.failure.handling.rebuild.overlap.time controls how many milliseconds before the failure timestamp the partial rebuild starts. The default is 1.

Disable mutable index on write failure with manual rebuild required

This is the lowest level of consistency for mutable secondary indexes. In this case, when a write to a secondary index fails, the index will be marked as disabled with a manual rebuild of the index required to enable it to be used once again by queries.

The following server-side configuration controls this behavior:

  • phoenix.index.failure.handling.rebuild must be set to false to prevent a mutable index from being rebuilt in the background in the event of a commit failure.


Setup

Non-transactional, mutable indexing requires special configuration options on the region server and master to run. Phoenix ensures that they are set up correctly when you enable mutable indexing on the table; if the correct properties are not set, you will not be able to use secondary indexing. After adding these settings to your hbase-site.xml, you'll need to do a rolling restart of your cluster.

You will need to add the following parameters to hbase-site.xml on each region server:
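
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.phoenix.hbase.index.wal.IndexedWALEditCodec</value>
</property>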


The above property enables custom WAL edits to be written, ensuring proper writing/replay of the index updates. This codec supports the usual host of WALEdit options, most notably WALEdit compression.

<property>
  <name>hbase.region.server.rpc.scheduler.factory.class</name>
  <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
  <description>Factory to create the Phoenix RPC Scheduler that uses separate queues for index and metadata updates</description>
</property>
<property>
  <name>hbase.rpc.controllerfactory.class</name>
  <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
  <description>Factory to create the Phoenix RPC Scheduler that uses separate queues for index and metadata updates</description>
</property>

The above properties prevent deadlocks from occurring during index maintenance for global indexes (HBase 0.98.4+ and Phoenix 4.3.1+ only) by ensuring that index updates are processed with a higher priority than data updates. They also prevent deadlocks by ensuring that metadata RPC calls are processed with a higher priority than data RPC calls.

From Phoenix 4.8.0 onward, no configuration changes are required to use local indexing. In Phoenix 4.7 and below, the following configuration changes are required in the server-side hbase-site.xml on the master and region server nodes (the settings shown below are the typical ones; verify the exact class names against your release):
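
<!-- Typical pre-4.8.0 local index settings; verify class names against your release -->
<property>
  <name>hbase.master.loadbalancer.class</name>
  <value>org.apache.phoenix.hbase.index.balancer.IndexLoadBalancer</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.phoenix.hbase.index.master.IndexMasterObserver</value>
</property>
<property>
  <name>hbase.coprocessor.regionserver.classes</name>
  <value>org.apache.hadoop.hbase.regionserver.LocalIndexMerger</value>
</property>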


Upgrading Local Indexes created before 4.8.0

When upgrading the servers to Phoenix 4.8.0 or later, remove the three local indexing related configurations above from hbase-site.xml if present. On the client side, local indexes created before 4.8.0 can be upgraded either online (while initializing the connection from a Phoenix client of version 4.8.0+) or offline (using the psql tool). As part of the upgrade, the local indexes are recreated in ASYNC mode. After the upgrade, the user needs to build the indexes using the IndexTool.

The following client-side configuration is used in the upgrade:

  1. phoenix.client.localIndexUpgrade
    • A value of true means online upgrade; false means offline upgrade (a sample client-side snippet follows this list).
    • Default: true
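
As a sketch, choosing the offline upgrade could look like the following in the client-side hbase-site.xml (assuming that is where your client configuration is kept):

<property>
  <name>phoenix.client.localIndexUpgrade</name>
  <value>false</value>
</property>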

Command to run the offline upgrade using the psql tool:

$ psql [zookeeper] -l


Tuning

Out of the box, indexing is pretty fast. However, to optimize for your particular environment and workload, there are several properties you can tune.

All of the following parameters must be set in hbase-site.xml - they apply to the entire cluster and all index tables, as well as to all regions on the same server (so, for instance, a single server would not write to too many different index tables at once). A sample snippet showing how to set one of them follows the list.

  1. index.builder.threads.max
    • Number of threads used to build the index update from the primary table update.
    • Increasing this value overcomes the bottleneck of reading the current row state from the underlying HRegion. Setting this value too high will simply bottleneck at the HRegion, as it will not be able to handle too many concurrent scan requests, and also raises general thread-swapping concerns.
    • Default: 10
  2. index.builder.threads.keepalivetime
    • Amount of time in seconds after which an unused thread in the builder thread pool is expired.
    • Unused threads are released after this amount of time, and no core threads are retained (a minor concern, as tables are expected to sustain a fairly constant write load); this allows us to drop threads when the expected load is not present.
    • Default: 60
  3. index.writer.threads.max
    • Number of threads to use when writing to the target index tables.
    • The first level of parallelization, on a per-table basis - it should roughly correspond to the number of index tables
    • Default: 10
  4. index.writer.threads.keepalivetime
    • Amount of time in seconds after which an unused thread in the writer thread pool is expired.
    • Unused threads are released after this amount of time, and no core threads are retained (a minor concern, as tables are expected to sustain a fairly constant write load); this allows us to drop threads when the expected load is not present.
    • Default: 60
  5. hbase.htable.threads.max
    • Number of threads each index HTable can use for writes.
    • Increasing this allows more concurrent index updates (for instance across batches), leading to high overall throughput.
    • Default: 2,147,483,647
  6. hbase.htable.threads.keepalivetime
    • Amount of time in seconds after which an unused thread in the HTable's thread pool is expired.
    • Using the “direct handoff” approach, new threads are only created when necessary, and the pool can grow unbounded. This could be problematic, but HTables only create as many Runnables as there are region servers; therefore, it also scales as new region servers are added.
    • Default: 60
  7. index.tablefactory.cache.size
    • Number of index HTables we should keep in cache.
    • Increasing this number ensures that we do not need to recreate an HTable for each attempt to write to an index table. Conversely, you could see memory pressure if this value is set too high.
    • Default: 10
  8. org.apache.phoenix.regionserver.index.priority.min
    • Value specifying the bottom (inclusive) of the range in which index priorities may lie.
    • Default: 1000
  9. org.apache.phoenix.regionserver.index.priority.max
    • Value specifying the top (exclusive) of the range in which index priorities may lie.
    • Higher priorities within the index min/max range do not mean updates are processed sooner.
    • Default: 1050
  10. org.apache.phoenix.regionserver.index.handler.count
    • Number of threads to use when serving index write requests for global index maintenance.
    • The actual number of threads is dictated by Max(number of call queues, handler count), where the number of call queues is determined by standard HBase configuration. To further tune the queues, you can adjust the standard RPC queue length parameters (currently, there are no special knobs for the index queues), specifically ipc.server.max.callqueue.length and ipc.server.callqueue.handler.factor. See the HBase Reference Guide for more details.
    • Default: 30
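
As a sketch, a tuning override is set in the server-side hbase-site.xml like any other HBase property; for example, to raise the index writer pool size described above (the value 20 is purely illustrative):

<property>
  <name>index.writer.threads.max</name>
  <value>20</value>
</property>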


Performance

We track secondary index performance via our performance framework. This is a generic test of performance based on defaults - your results will vary based on hardware specs as well as your individual configuration.

That said, we have seen secondary indexing (both immutable and mutable) go as quickly as < 2x the regular write path on a small (3 node) desktop-based cluster. This is actually pretty reasonable, as we have to write to multiple tables as well as build the index update.


Presentations

There have been several presentations given on how secondary indexing works in Phoenix that take a more in-depth look at how indexing works (with pretty pictures!).
