SolrCloud Terminology

Posted on July 17, 2013 by Ryan Tabora

What is SolrCloud?

While the ability to shard a logical Solr index is an excellent feature, managing a distributed Solr cluster before SolrCloud meant facing a number of complex challenges:

* Maintaining a consistent view of the index – Updates and deletes must be directed to the correct shard so that there is only one version of a given document.

* Automating Failure Recovery – If a shard goes down, that portion of the index is gone and must be manually restored from a backup.

* Cluster Configuration – Managing the schema.xml and solrconfig.xml files in a distributed environment can be a pain. Users had to learn and integrate tools like Puppet or Chef, or build custom configuration deployment utilities, to manage files across a cluster.

* Durability – Guaranteeing that a write is durable is difficult in a distributed environment. What happens if a document indexed to a shard never hits disk before the shard goes down? Users are left to develop their own solutions to provide durability.

* Querying the Cluster – If a shard goes down, applications querying Solr must be notified, and their shard lists must change.

SolrCloud addresses all of these issues and more by building on top of the already existing distributed Solr features:

* Built-In Partitioning – The user simply defines the number of shards for the cluster and Solr manages the shard partitioning scheme. Updates and deletes are always sent to the correct shard.

* Transaction Log for Durability – A transaction log is written to disk for write durability.

* Centralized Configuration – ZooKeeper manages cluster configuration, so when you want to start a shard in the cluster you can simply point it at a configuration stored in ZooKeeper (a sketch of creating a collection against such a configuration follows this list).

* Replication and Automatic Failover – SolrCloud supports shard replicas and automatic failover for all nodes in the cluster.
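To make this concrete, here is a minimal sketch of creating a sharded, replicated collection against a configuration stored in ZooKeeper, using SolrJ (Solr's Java client). Note that the CloudSolrClient class and Collections API helpers shown here come from Solr releases later than this 2013 post (back then the client class was CloudSolrServer), and the ZooKeeper address "zk1:2181" and config name "peopleConfig" are placeholders for your own environment:

```java
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class CreatePeopleCollection {
    public static void main(String[] args) throws Exception {
        // Connect through ZooKeeper rather than to any single Solr node;
        // "zk1:2181" is a placeholder for your ZooKeeper ensemble.
        try (CloudSolrClient client = new CloudSolrClient.Builder(
                Collections.singletonList("zk1:2181"), Optional.empty()).build()) {

            // Create a collection with 3 shards and 2 replicas per shard.
            // "peopleConfig" is an illustrative config set name that must
            // already be uploaded to ZooKeeper.
            CollectionAdminRequest
                .createCollection("people", "peopleConfig", 3, 2)
                .process(client);
        }
    }
}
```

Because the client talks to ZooKeeper rather than to any single Solr node, it always sees the current cluster layout, and Solr handles the shard partitioning scheme itself.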

Shards, Cores, and Collections Oh My!

The first challenge users typically face when trying out SolrCloud is simply getting the terminology correct. Introducing terms like collections and replicas while reusing terms like cores and shards can confuse anyone. This diagram should help set the record straight as well as help you understand the architecture of SolrCloud.

[Diagram: the Books, People, and Sites Collections spread across a cluster of Solr Instances, with Leader and Replica Cores inside each Shard]
The main term SolrCloud introduces is the Collection. A Collection is essentially a synonym for a logical index. Each Collection has its own schema.xml to define the types for its index. Generally, users create multiple Collections to separate logical units of data that will not be intermingled, similar to databases in the relational world. Collections are generally isolated from one another and do not typically communicate with each other (though they can).

In our diagram, we have three Collections: Books, People, and Sites. These Collections are hosted on a number of Solr Instances (the long vertical rectangles behind everything in the picture), each of which is a Solr process running in its own JVM. Each Solr Instance could run on a different computer in a cluster (though you can run multiple Solr Instances on a single machine), but for simplicity we will say that each Solr Instance in this diagram runs on a separate machine in the cluster.
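To make the database analogy concrete, here is a minimal sketch (again using SolrJ, with placeholder collection names and queries) of searching two Collections independently; like databases, each is addressed by name and searched in isolation:

```java
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class QueryCollections {
    public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder(
                Collections.singletonList("zk1:2181"), Optional.empty()).build()) {

            // Each Collection is queried by name, like choosing a database:
            // the same client can reach "books" and "people" independently,
            // and each query only touches that Collection's logical index.
            QueryResponse books = client.query("books", new SolrQuery("title:solr"));
            QueryResponse people = client.query("people", new SolrQuery("name:ryan"));

            System.out.println("books hits:  " + books.getResults().getNumFound());
            System.out.println("people hits: " + people.getResults().getNumFound());
        }
    }
}
```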

Collections are made up of one or more SolrCores (abbreviated to Cores). A single Core is contained within a single Solr Instance and holds a complete Lucene index. If a Collection is partitioned across many Cores, then each partition of the logical index is called a Shard (you might also hear Shards referred to as Slices). As you can see in the diagram, the People Collection has three Shards, with a number of Cores in each of those Shards (we'll get to the details of the Cores within a Shard in a moment). The Books Collection has two Shards and the Sites Collection has four. The diagram only shows Collections that are partitioned into Shards in order to distribute the index across many machines; keep in mind that if your index fits onto a single machine, you don't need to shard it.

Each Shard contains exactly one Leader Core and zero or more Replica Cores. A Replica is a Core that holds a copy of a Leader Core's index and is available for failover if its corresponding Leader Core becomes unavailable. Leader Cores are responsible for sending copies of the SolrDocuments they receive to their Replicas; Replica Cores and other Leader Cores are responsible for forwarding a SolrDocument to the proper Leader Core.

If the index of a Collection is partitioned, then a Leader and its Replicas together form a Shard. In the diagram we can see that the People Collection is broken into three Shards, each with a varying number of Cores. The first Shard of the People Collection has a single Leader Core and a single Replica Core; the Leader Core forwards any SolrDocuments sent to it to its Replica Core. The second and third Shards have three Cores each: one Leader and two Replicas. We can also see that the client has sent a SolrDocument to the Replica Core of the first Shard, and that Core is routing the document to the proper Leader Core in the third Shard.
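From the client's point of view, none of this routing needs to be handled by hand. Here is a hedged SolrJ sketch of indexing a document into the People Collection; the field names and id are illustrative, and the routing comment describes SolrCloud's default hash-based document routing:

```java
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class IndexPerson {
    public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder(
                Collections.singletonList("zk1:2181"), Optional.empty()).build()) {

            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "person-42");   // the unique key decides the target Shard
            doc.addField("name", "Ryan");

            // With default routing, the client hashes the id, looks up that
            // Shard's Leader in the cluster state it reads from ZooKeeper,
            // and sends the document to that Leader; the Leader then
            // forwards copies to its Replicas.
            client.add("people", doc);
            client.commit("people");
        }
    }
}
```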

The Books Collection is split into two Shards, each with two Cores. Here the Leader Core of the second Shard has catastrophically failed, and the former Replica of the second Shard is automatically converting into a Leader Core. Any SolrDocuments to be indexed to the second Shard of the Books Collection will now be sent to the new Leader Core, and admins can bring the dead Core back up in the meantime. Also note that this Leader Core (in fact all Leader Cores, even though it is not depicted in the diagram) is communicating with ZooKeeper. ZooKeeper helps manage cluster state: it knows which Cores are Leaders and which Cores have stopped responding, and SolrCloud uses this information to automatically handle Core failures. More details on the ZooKeeper integration will be covered in the following ZooKeeper section.
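If you want to see the state ZooKeeper is tracking, the Collections API exposes a CLUSTERSTATUS action. A minimal SolrJ sketch, with the same placeholder ZooKeeper address as above:

```java
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.response.CollectionAdminResponse;

public class ShowClusterState {
    public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder(
                Collections.singletonList("zk1:2181"), Optional.empty()).build()) {

            // CLUSTERSTATUS reports, per Collection and per Shard, which
            // Core is the Leader and which Replicas are active, recovering,
            // or down -- the same state ZooKeeper tracks for failover.
            CollectionAdminResponse status =
                CollectionAdminRequest.getClusterStatus().process(client);
            System.out.println(status.getResponse());
        }
    }
}
```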

Finally, we can see the Sites Collection, which is split into four Shards, each with a single Core. Since there are no Replica Cores, if one of those Leaders fails, then a portion of the index will be gone.

Hopefully this diagram has helped clear up some of the SolrCloud terminology. If you have any questions, feel free to reach out!

Ryan Tabora
Senior Data Engineer
Think Big



