Sunday, April 9, 2017

Setting Up JEE 7 REST Web Application on Glassfish 4.1.1/4.1.2

Setting up a JEE web application in any of the many IDEs is typically pretty straightforward, but because of some issues with the Glassfish libraries when using Jersey with Jackson, it turned out to be a little more challenging than expected.  Luckily, others have encountered the issue as well, so it wasn't too difficult to figure out.  However, the answer also wasn't 100% straightforward, so I decided to document the steps here.

This post will guide you through creating a JEE 7 web application that will contain a simple REST interface.  We'll be using IntelliJ IDEA Ultimate 2017.1, JEE 7, Java 8, Maven 3, and Glassfish 4.1.2 (same steps worked for 4.1.1).

Install Java 8.  There are plenty of guides out there on how to do this for the various platforms, so I will not cover it here.

Download Glassfish 4.1.2 (or 4.1.1).  Unzip the distribution and Glassfish is ready to go.

In IDEA, click File->New->Project and select Maven as the project type.  Ensure the Project SDK is set to Java 8, then check the Create from archetype checkbox and select org.apache.maven.archetypes:maven-archetype-webapp.

Click Next and fill in the GroupId, ArtifactId, and Version fields, then click Next again twice.  Fill in a Project name and click Finish.

Since we're using Java 8, we need to tell IDEA and Maven to compile for Java 8.  Go to File->Settings->Build, Execution, Deployment->Java Compiler and for Project bytecode version select 1.8.

Also, go to File->Settings->Build, Execution, Deployment->Build Tools->Maven->Runner and ensure 1.8 is selected as the JRE version.

We'll need to set up the dependencies for Glassfish, Jersey, and Jackson in our pom.xml.  Add the dependencies, etc., as shown below.


IDEA will probably prompt you that changes were made to the pom.xml and you can have it automatically perform the import of the changes.  Go ahead and do this.

Inside of src/main/webapp/WEB-INF/web.xml, add the following to set up Jersey and set our REST context to /webapi:

 <!DOCTYPE web-app PUBLIC
  "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
  "" >
 <web-app>
   <servlet>
     <servlet-name>Books REST Example Application</servlet-name>
     <servlet-class>org.glassfish.jersey.servlet.ServletContainer</servlet-class>
     <init-param>
       <param-name>jersey.config.server.provider.packages</param-name>
       <param-value>com.sit.rest</param-value>
     </init-param>
     <load-on-startup>1</load-on-startup>
   </servlet>
   <servlet-mapping>
     <servlet-name>Books REST Example Application</servlet-name>
     <url-pattern>/webapi/*</url-pattern>
   </servlet-mapping>
 </web-app>

Next we need to make sure all of the libraries required for Jersey, Jackson, etc. are included in the war when it is built.  Go to File->Project Structure->Artifacts-><project name>:war exploded.  Under the Output Layout tab, expand WEB-INF/lib.  Then right-click it and select Add Copy of->Library Files.  Select all of the libraries shown under Project Libraries, then click OK.  They should now appear under the lib folder in the Output Layout tab.

Click OK.

Next we need to create the REST resource, a model representing the resource, and a helper class that will contain our fake book data.

First create a java directory under src/main.  Right click it, then select Mark Directory as->Sources root so that IDEA treats the files there as source files.

Create BookResource.java under src/main/java/com/sit/rest.




 import javax.ws.rs.*;
 import javax.ws.rs.core.Response;

 @Path("books")
 public class BookResource {

   @GET
   @Path("{id}")
   @Produces("application/json")
   public Response getBook(@PathParam("id") int id) {
     Book b = BookManager.getBookWithId(id);
     if (b != null) {
       return Response.status(Response.Status.OK).entity(b).build();
     } else {
       return Response.status(Response.Status.NOT_FOUND).build();
     }
   }

   @GET
   @Produces("application/json")
   public Response getBooks() {
     return Response.status(Response.Status.OK).entity(BookManager.getAllBooks()).build();
   }
 }
Create Book.java under src/main/java/com/sit/rest/model.  It is important to note that the class must have a parameter-less constructor so that it can be instantiated during deserialization.

 public class Book {
   private int id;
   private String title;
   private String author;

   public Book() {
   }

   public int getId() {
     return id;
   }

   public void setId(int id) {
     this.id = id;
   }

   public String getTitle() {
     return title;
   }

   public void setTitle(String title) {
     this.title = title;
   }

   public String getAuthor() {
     return author;
   }

   public void setAuthor(String author) {
     this.author = author;
   }

   @Override
   public String toString() {
     return "Book{" +
         "id='" + id + '\'' +
         ", title='" + title + '\'' +
         ", author='" + author + '\'' +
         '}';
   }
 }
Finally, create BookManager.java under src/main/java/com/sit/rest.


 import java.util.ArrayList;
 import java.util.List;
 import java.util.Optional;

 public class BookManager {
   private static List<Book> books = new ArrayList<>();

   static {
     Book b = new Book();
     b.setId(1);
     b.setAuthor("Charles Dickens");
     b.setTitle("Great Expectations");
     books.add(b);

     b = new Book();
     b.setId(2);
     b.setAuthor("Mark Twain");
     b.setTitle("Tom Sawyer");
     books.add(b);
   }

   static Book getBookWithId(int id) {
     Optional<Book> book = books.stream().filter(b -> b.getId() == id).findFirst();
     return book.orElse(null);
   }

   static List<Book> getAllBooks() {
     return books;
   }
 }
Next, we'll modify index.jsp located under src/main/webapp to add a couple links to our REST resource.

 <p><a href="webapi/books">All Books</a>  
 <p><a href="webapi/books/1">Great Expectations</a>  
 <p><a href="webapi/books/2">Tom Sawyer</a>  

In order to run everything we need to setup IDEA to deploy to Glassfish by creating a configuration for it.  Click Run->Edit Configurations.  Click the "+" button and select Glassfish Server->Local.  Set the name to something like "Glassfish 4.1.1".  On the Server tab click Configure.  Click the "+" button to add a new Glassfish Server.  Choose the directory where you unzipped Glassfish at the beginning of this tutorial, then click OK.  For Server Domain, choose domain1.  On the Deployment tab click the "+" button and then select Artifact to add a deployment artifact.  Choose the "exploded" version.  Then click OK to complete the configuration.

You can now run the server and deploy the application by clicking the green Play button next to the configuration name in the upper right of the IDE.  Wait until the output shows that the artifact was deployed and it should also open a browser window showing the contents of the index.jsp we edited.

At this point you should get an internal server error; I will explain how to resolve it next.

There are two underlying problems, discussed in the following two JIRA issues:

The first issue (21440) involves modifying the MANIFEST.MF file inside org.eclipse.persistence.moxy.jar in the Glassfish modules directory, adding org.xml.sax.helpers, javax.xml.parsers, and javax.naming to the end of the Import-Package: block.  Modify it and save the jar.

The second issue deals with an error that occurs during the first request to a resource but is not present on subsequent requests.  To resolve it, a different version of the JAXB Annotations module needs to be placed into the Glassfish modules directory at <glassfish_home>/glassfish/modules.  Download the jar at the following link and place it into the modules directory.

In addition, we must update web.xml to add com.fasterxml.jackson.jaxrs.json to the param-value in the init-param for the servlet.

 <!DOCTYPE web-app PUBLIC
  "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
  "" >
 <web-app>
   <servlet>
     <servlet-name>Books REST Example Application</servlet-name>
     <servlet-class>org.glassfish.jersey.servlet.ServletContainer</servlet-class>
     <init-param>
       <param-name>jersey.config.server.provider.packages</param-name>
       <param-value>com.sit.rest, com.fasterxml.jackson.jaxrs.json</param-value>
     </init-param>
     <load-on-startup>1</load-on-startup>
   </servlet>
   <servlet-mapping>
     <servlet-name>Books REST Example Application</servlet-name>
     <url-pattern>/webapi/*</url-pattern>
   </servlet-mapping>
 </web-app>

Stop the Glassfish server by going to the Run tab and clicking the red Stop button.  Then, clear out the Glassfish osgi-cache by deleting everything in the directory located at:


On the Run tab within IDEA, click the Play button to the left of the Server tab to start the Glassfish server.

Now the links in index.jsp should work as expected and return a JSON formatted response.


The code for this post can be found at

Sunday, October 16, 2016

Testing Golang code that uses exec.Command

At some point you may need to test code that calls exec.Command but do not want the command to actually run.  There are other posts that describe how to do this in Go, such as Nate Finch's excellent write-up at  Before reading this post, I highly recommend you read Nate's post to get familiar with the mechanisms that allow the code in this post to work.  His write-up is a simple example of how to get something working.  However, I was looking for something more flexible: a utility that could be used in any test without reproducing a bunch of boilerplate code, and that allows mocked responses for multiple calls to the same command within a test.  So I decided to write a utility to help with those very pieces.

First, to help keep things organized let's define a struct to hold some details about how we want the command that is executed to behave.

type ExecCmdTestResult struct {
 command  string
 exitCode int
 stdOut   string
 stdErr   string
}

The command field holds the actual command that we want to mock a response for.  The exitCode is the exit code we want the process to exit with.  The stdOut and stdErr strings hold what we want the mocked command process to write to stdout and stderr.

We will also create the following structure which allows us to hold the mocked responses and to know which test function to call when we want to execute the mocked command.

type ExecCmdTestHelper struct {
 testResults        map[string][]ExecCmdTestResult
 testHelperFuncName string
}

For convenience, we create a "New" function to create the helper.

func NewExecCmdTestHelper(testHelperFuncName string) *ExecCmdTestHelper {
 return &ExecCmdTestHelper{
  testResults:        make(map[string][]ExecCmdTestResult),
  testHelperFuncName: testHelperFuncName,
 }
}

Next, we need a way to add mocked results to the helper, so we define a function that lets us add mocked results one by one.

func (e *ExecCmdTestHelper) AddExecResult(stdOut, stdErr string, exitCode int, command ...string) {
 fullCommand := strings.Join(command, " ")
 base64Command := base64.StdEncoding.EncodeToString([]byte(fullCommand))

 result := ExecCmdTestResult{
  stdOut:   stdOut,
  stdErr:   stdErr,
  exitCode: exitCode,
  command:  fullCommand,
 }

 if e.testResults[base64Command] == nil {
  e.testResults[base64Command] = make([]ExecCmdTestResult, 0)
 }

 e.testResults[base64Command] = append(e.testResults[base64Command], result)
}

The function above creates an ExecCmdTestResult instance and adds it to the list of results.

The helper will also provide a function that will be used as the stand-in for exec.Command. We'll call it ExecCommand which is shown below.

func (m *ExecCmdTestHelper) ExecCommand(command string, args ...string) *exec.Cmd {
 cs := []string{"-test.run=" + m.testHelperFuncName, "--", command}
 cs = append(cs, args...)
 cmd := exec.Command(os.Args[0], cs...)

 fullCommand := command
 if len(args) > 0 {
  fullCommand = command + " " + strings.Join(args, " ")
 }

 base64Command := base64.StdEncoding.EncodeToString([]byte(fullCommand))

 if len(m.testResults[base64Command]) == 0 {
  fmt.Println("No result was setup for command: ", fullCommand)
  return nil
 }

 // Retrieve next result
 mockResults := m.testResults[base64Command][0]

 // Remove current result so that next time the next result that was
 // set up is used.  If there is no next result, re-use the same result.
 if len(m.testResults[base64Command]) > 1 {
  m.testResults[base64Command] = m.testResults[base64Command][1:]
 }

 stdout := execTestStdOutputKey + "=" + mockResults.stdOut
 stderr := execTestStdErrorKey + "=" + mockResults.stdErr
 exitCode := execTestExitCodeKey + "=" + strconv.FormatInt(int64(mockResults.exitCode), 10)

 cmd.Env = []string{"GO_WANT_HELPER_PROCESS=1", stdout, stderr, exitCode}

 return cmd
}

The above function takes the responses you set up on the helper and cycles through them on each call to the specified command.  The responses are stored in the command's environment so that the function that mocks the process can retrieve them and produce the desired response.  If the command is called more times than the number of results that were mocked for it, the last mocked result is used for all subsequent calls.  For example, if only one result was mocked, with an exit code of "0", and we call the same command twice, the second call will also receive an exit code of "0".  If no mocked result was found for the given command, then it returns nil.
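That queue-cycling behavior can be sketched in isolation (a standalone illustration, not part of the helper itself):

```go
package main

import "fmt"

// nextResult returns the front of the result queue and the remaining
// queue.  The last element is never removed, so once the queue is down
// to a single result, every subsequent call re-uses that result.
func nextResult(queue []string) (string, []string) {
	head := queue[0]
	if len(queue) > 1 {
		queue = queue[1:]
	}
	return head, queue
}

func main() {
	q := []string{"exit 0", "exit 1"}
	var r string
	r, q = nextResult(q)
	fmt.Println(r) // first mocked result
	r, q = nextResult(q)
	fmt.Println(r) // second (and last) mocked result
	r, q = nextResult(q)
	fmt.Println(r) // the last result is re-used
}
```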

Within your code you will need to replace the calls to exec.Command with a variable that holds a function with the same signature as ExecCommand above.  This way, in production you can set the variable to exec.Command, but in tests it can be set to the helper's ExecCommand function.  In your tests the helper's ExecCommand will be called instead of exec.Command, which in turn re-executes the test binary, running the test function with the name given (testHelperFuncName).  Within your test file you must add a function with that name which calls another new function named RunTestExecCmd that mocks stdout, stderr, and the exit code.

Add this to your test file:

func TestHelperProcess(t *testing.T) {
 RunTestExecCmd()
}

Function that mocks process response:

func RunTestExecCmd() {
 if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
  return
 }

 stdout := os.Getenv(execTestStdOutputKey)
 stderr := os.Getenv(execTestStdErrorKey)
 exitCode, err := strconv.ParseInt(os.Getenv(execTestExitCodeKey), 10, 64)
 if err != nil {
  // Fall back to a failure exit code if the value could not be parsed.
  exitCode = 1
 }

 // Write the mocked output verbatim (Fprint avoids interpreting % verbs).
 fmt.Fprint(os.Stdout, stdout)
 fmt.Fprint(os.Stderr, stderr)
 os.Exit(int(exitCode))
}

The function above retrieves the values that were set by the ExecCommand function from the command's environment and uses them to create the proper stdout and stderr strings and exit with the proper code.

All of the code above, along with an example of how to use it can be found at the following gist:

Hopefully you can now more conveniently test code that calls exec.Command!

Tuesday, June 23, 2015

Setting Up SolrCloud in Solr 5.x

While there is a lot of documentation on the Solr Confluence Wiki, it may be challenging to find all of the right levers to pull in order to start a multi-node SolrCloud instance without using the provided script, which creates an example SolrCloud for you via a command-line wizard.  This post is intended to be a step-by-step guide to manually creating a SolrCloud cluster without the use of the example script.

Installing a Zookeeper Ensemble

In order for your Solr instances to automatically receive configuration and participate in the cluster, you need to install a Zookeeper ensemble.  It is possible to run your SolrCloud with only one Zookeeper instance; however, it is recommended to have at least 3 instances.  Why three and not two?  Zookeeper requires a quorum (a strict majority of its nodes) to be considered up and running.  With two instances, if one goes down, the single remaining instance is not a majority, so the ensemble stops.  With three instances, if one goes down you still have two out of three running, which is still a quorum, so the ensemble keeps running.
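The majority rule can be written down directly (a throwaway sketch, not part of any Zookeeper tooling):

```go
package main

import "fmt"

// quorum returns the minimum number of live nodes an ensemble of the
// given size needs in order to keep running: a strict majority.
func quorum(ensembleSize int) int {
	return ensembleSize/2 + 1
}

func main() {
	fmt.Println(quorum(2)) // 2: losing either node loses the quorum
	fmt.Println(quorum(3)) // 2: one node can fail and a majority survives
}
```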

Let's get started.

Create a directory on the server named "solrcloud".  We'll refer to this as <BASE_INSTALL_DIR>.  

First, download Zookeeper from the Apache project website at  At the time of this writing, SolrCloud uses version 3.4.6.

Once the distribution is downloaded, unzip/untar it to <BASE_INSTALL_DIR>.  A folder named "zookeeper-3.4.6" will be extracted.  We'll refer to this as <ZOOKEEPER_HOME> from now on.

We'll be creating three ZooKeeper instances, each of which needs a data directory.  In <BASE_INSTALL_DIR>, create a directory named "zdata".  Under the zdata directory, create a directory for each instance, named "1", "2", and "3".  In a production environment each ZooKeeper instance would be on a different server, so you would just create a data directory wherever makes sense on each server; however, to keep things simple we'll be creating all three instances, and thus all three data directories, on one machine.

Within each of the data directories, a file named "myid" must be created.  The only thing that goes in the "myid" file is the instance id.  For now, we'll just use "1", "2", and "3" as the ids for the ZooKeeper instances.  So add a "myid" file to each of the instance data directories and put the appropriate id in each.

Your directory structure should look like the following:

With Zookeeper on a single machine, it is not necessary to install multiple copies in order to have multiple instances.  You just need to create a ZooKeeper configuration file for each instance whose name ends with the id of the instance.  To create the configuration files, go to <ZOOKEEPER_HOME>/conf.  Copy zoo_sample.cfg and name the new configuration file zoo.cfg for instance 1, zoo2.cfg for instance 2, and zoo3.cfg for instance 3.  Open the config files and update the "clientPort" property in each file.  For the purposes of this guide, we'll increment the port number by one for each instance, but in production the instances will be on different servers, so you can either leave the default port "2181" or change it to the port you wish to use.  You will also need to configure the ports that the ZooKeeper instances use to communicate with each other.

The configuration file for instance 1 should look similar to the following:
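A minimal zoo.cfg for instance 1 might look like this; the dataDir and clientPort follow the values above, while the server.N peer-communication ports are example values chosen for a single machine:

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=<BASE_INSTALL_DIR>/zdata/1
clientPort=2181
server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890
```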



The only differences in the configuration files for the other two Zookeeper instances should be the "clientPort" values, which should be "2182" for instance 2 and "2183" for instance 3, and the "dataDir" values, which should point to the data directories we created previously for each instance.

Your ZooKeeper install should contain the configuration files as shown below.

Now you are ready to start your ZooKeeper Ensemble, but before we do that, let's create a helper script to start them all without having to type the startup command for each instance every time.

In your <BASE_INSTALL_DIR>, create a file named "".  In the file add the following:

cd ./zookeeper-3.4.6
bin/ start zoo.cfg
bin/ start zoo2.cfg
bin/ start zoo3.cfg

Ensure you give the script execute permission, then run it from the command line.  You should see output similar to the following:

JMX enabled by default
Using config: <BASE_INSTALL_DIR>/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
JMX enabled by default
Using config: <BASE_INSTALL_DIR>/zookeeper-3.4.6/bin/../conf/zoo2.cfg
Starting zookeeper ... STARTED
JMX enabled by default
Using config: <BASE_INSTALL_DIR>/zookeeper-3.4.6/bin/../conf/zoo3.cfg
Starting zookeeper ... STARTED

Creating a configset

Before we create the Solr instances, we'll need a configset so we can create a collection to shard and replicate across multiple instances.  Creating a configset is very specific to your own collection, so it is out of scope for this guide; however, I will offer a few pointers.

If you use one of the pre-built configsets that come with Solr 5 (located in solr-5.2.1/server/solr/configsets), you don't have to do anything.  However, if you do roll your own, keep in mind the following:
  • Any paths referenced in your solrconfig.xml must be updated to reflect paths relative to your Solr instance directories (which we will create in a later section).
  • If you have a need for additional jar files such as jdbc drivers, you can add a "lib" directory inside your solr instance collection specific directories and they will automatically be picked up by Solr, so you do not have to modify solrconfig.xml in order to use them.
    • Note: The collection directories will be created by Solr once you create your collection, so you will have to add the lib directory and jars once you have completed "Adding a collection" section later in this guide.  The directory will be at <BASE_INSTALL_DIR>/solr-5.2.1/server/<instance>/<collection>.  Ex. <BASE_INSTALL_DIR>/solr-5.2.1/server/solr/mycollection_shard1_replica1.  Restart the Solr instances once the jars are in place.
  • Ensure the appropriate <lib> tags are added to solrconfig.xml for any libraries you need in addition to the ones that may already be there.  For example, to use the DataImportHandler, you need to add the following lines if they don't already exist:
    •   <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar" />
    •   <lib dir="${solr.install.dir:../../../..}/contrib/dataimporthandler-extras/lib" regex=".*\.jar" />
    •   <lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
  • Create/update the schema.xml as necessary to map data from the source to a Solr document.

Uploading a configset to Zookeeper

Note: This section is only relevant if you want to upload your configuration ahead of time instead of specifying the configuration in the "create" command used in the "Adding a Collection" section, or if you are using the Collections API to issue a "create" command via the REST interface.  When creating a collection via the REST interface, you cannot specify a configset directory like you can with the solr script from the command line.  Feel free to skip this section unless you plan on using the Collections API instead of the bin/solr script to create a collection.

In order for a configset to be used in SolrCloud, it needs to reside within Zookeeper.  Zookeeper uses this configuration to automatically propagate configuration to the Solr instances and create your collection on each instance.

To upload the configset, you will need to use zkcli.sh, which is in <BASE_INSTALL_DIR>/solr-5.2.1/server/scripts/cloud-scripts.  So go to that directory and issue the following command:

./ -zkhost localhost:2181,localhost:2182,localhost:2183 -cmd upconfig -confname <your conf name> -confdir <BASE_INSTALL_DIR>/solr-5.2.1/sever/solr/configsets/<your conf dir>/conf

The above assumes you have put your configset in the configsets directory, however it doesn't have to be there.  Also, in a production system, you won't be using localhost and the ports may be different, but you'll just need to update the host and ports as necessary for your environment.

After running the command, your configset should be uploaded to Zookeeper, but we don't have a Solr instance up and running yet so we won't be able to check it via the web interface quite yet.

Creating Solr Instances

In a production environment each instance will be on a separate server, so just like the Zookeeper instances they will likely use the same port but different hosts.  However, for the purposes of this guide, we will create several instances on the same machine.  Luckily, this is almost as easy as it was for Zookeeper: instead of adding configuration files, you create an additional directory to serve as the Solr home directory for each instance.  The current Solr home is <BASE_INSTALL_DIR>/solr-5.2.1/server/solr.  We'll add 3 additional directories for a total of 4 instances; you can add as many as necessary, but four is enough to demonstrate sharding and replication.

Under <BASE_INSTALL_DIR>/solr-5.2.1/server, add directories named "solr2", "solr3", and "solr4" to represent our additional instances.  Copy solr.xml from the original solr home directory into each of the newly created directories.  Then open each file and update the ports.  Updating the ports is only necessary because all of the instances are on the same machine and can't listen on the same port.  Use the following port numbers for the purposes of this guide:

Instance 1: 8983
Instance 2: 8984
Instance 3: 8985
Instance 4: 8986

Your directory structure with the new instances should look similar to the following:

That's all you need to do to create additional Solr instances.  Simple right?  Of course they don't do much now since they have no collections configured, but that's what we'll do in a minute.  However, first we need to start up our Solr instances.

Starting Solr Instances

In order to start a Solr instance as part of the cloud and connected with the Zookeeper ensemble, issue the following commands from <BASE_INSTALL_DIR>/solr-5.2.1.

  • bin/solr start -cloud -s server/solr -p 8983 -z localhost:2181,localhost:2182,localhost:2183 -noprompt
  • bin/solr start -cloud -s server/solr2 -p 8984 -z localhost:2181,localhost:2182,localhost:2183 -noprompt
  • bin/solr start -cloud -s server/solr3 -p 8985 -z localhost:2181,localhost:2182,localhost:2183 -noprompt
  • bin/solr start -cloud -s server/solr4 -p 8986 -z localhost:2181,localhost:2182,localhost:2183 -noprompt
As with the Zookeeper instances, you can create a script that contains these commands as well so that you don't have to type them one by one each time you want to start your instances.  Name it something like "" and put it in the <BASE_INSTALL_DIR> and make sure you give it execute permission.

cd solr-5.2.1
bin/solr start -cloud -s server/solr -p 8983 -z localhost:2181,localhost:2182,localhost:2183 -noprompt
bin/solr start -cloud -s server/solr2 -p 8984 -z localhost:2181,localhost:2182,localhost:2183 -noprompt
bin/solr start -cloud -s server/solr3 -p 8985 -z localhost:2181,localhost:2182,localhost:2183 -noprompt
bin/solr start -cloud -s server/solr4 -p 8986 -z localhost:2181,localhost:2182,localhost:2183 -noprompt

Upon successful execution of the startup commands you should see the following output:
Waiting to see Solr listening on port 8983 [/]  
Started Solr server on port 8983 (pid=37286). Happy searching!

Waiting to see Solr listening on port 8984 [/]  
Started Solr server on port 8984 (pid=37386). Happy searching!

Waiting to see Solr listening on port 8985 [/]  
Started Solr server on port 8985 (pid=37489). Happy searching!

Waiting to see Solr listening on port 8986 [/]  
Started Solr server on port 8986 (pid=37591). Happy searching!

Once the instances are up, you can open a web browser and go to the Solr web pages at:

Adding a Collection

First let's verify the configuration we uploaded earlier for our collection is in Zookeeper, otherwise we won't be able to create the collection.  So, fire up a browser and go to http://localhost:8983/solr.

Navigate to the "Cloud" tab and open the "Tree" tab underneath it.  You should see a tree containing the files in your Zookeeper ensemble.  Within that set of files is a directory named "configs".  Open that up and you should see your configuration there.

Now that you have a running Zookeeper ensemble along with four Solr instances, we can easily add your custom Solr collection.  In order to do this, we'll use the solr utility in <BASE_INSTALL_DIR>/solr-5.2.1/bin.  You could also use the Collections API directly and issue commands via the REST interface running on your Solr instances.  See for more details on the Collections API.  Note that when using the Collections API to issue a "create" command, the configuration will already need to be in Zookeeper.  Please refer to the "Uploading a configset to Zookeeper" section above on how to upload your configset.

Issue the following command to create your collection:

  • bin/solr create -c <collection name> -d <config directory> -n <config name> -p 8983 -s 2 -rf 2

The above command creates a collection with the name you specify in the -c argument; this can be anything you want to name your collection.  The -d argument specifies the config directory where your configset resides; the command looks in <BASE_INSTALL_DIR>/solr-5.2.1/server/solr/configsets for the directory name you specify and automatically adds the config to Zookeeper.  The -n argument specifies the name you wish to give this configuration in Zookeeper; name it something meaningful so you can find it in the Solr admin console later on.  The -p option specifies the port of the Solr instance you are creating this collection on.  Since you are using Zookeeper, even though you specify only one of the Solr instances, the collection will be propagated as necessary to the other instances.  The -s argument specifies the number of shards, and the -rf argument specifies the replication factor, i.e. how many copies of each shard you want.  Since this example specifies two shards and two replicas, Zookeeper and Solr will automatically create a primary/leader for each shard on a separate instance, and a replica of each of those shards on other instances, using the four instances we configured, without us having to do any work other than creating the collection on one of the Solr instances.
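The arithmetic of the example is worth spelling out (a trivial sketch, not part of any Solr tooling):

```go
package main

import "fmt"

// totalCores returns the number of cores a SolrCloud collection
// creates: one core per shard per replica.
func totalCores(shards, replicationFactor int) int {
	return shards * replicationFactor
}

func main() {
	// -s 2 -rf 2 creates four cores, which land one per instance
	// across the four Solr instances in this guide.
	fmt.Println(totalCores(2, 2)) // prints 4
}
```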

If everything worked, you should see a new directory within each of your solr instance directories with the name of your config followed by shard and replica labels.

  • <collection_name>_shard1_replica1 
  • <collection_name>_shard1_replica2
  • <collection_name>_shard2_replica1
  • <collection_name>_shard2_replica2

The instance that each of these cores lands on may differ for each installation, since all of the Solr instances were up before we created the collection.  If you want more control, you can forego starting all of the Solr instances at once and bring up only one to start with.  If you do this, the first shard will be placed on the running instance.  Then you can start the next server and the next core will be placed on the new instance.  Keep repeating this until they are all up, and you will end up with specific shards and replicas on specific servers.

To view your SolrCloud go to http://localhost:8983/solr/#/~cloud which will show a diagram of all the instances in your cloud.

Stopping Zookeeper and Solr Instances

Stopping the Solr instances is very easy.  Just issue the following command from the <BASE_INSTALL_DIR>/solr-5.2.1 directory:

bin/solr stop -all

If you want to stop a particular instance remove the "-all" argument and supply the "-p" argument and specify the port of the instance you want to stop.

bin/solr stop -p 8984

Stopping Zookeeper instances is also very easy.  From the <BASE_INSTALL_DIR>/zookeeper-3.4.6 directory run the following command:

bin/ stop zoo.cfg

Replace zoo.cfg with the appropriate instance configuration as necessary, i.e. zoo2.cfg or zoo3.cfg.

Hopefully this guide was helpful to you.  That's all for now!

Thursday, May 14, 2015

Issue with during Oracle 11g install on CentOS 7

Installing Oracle 11g Enterprise on CentOS 7 didn't go quite as smoothly as planned.  However, by combining knowledge across several articles I was finally able to make it work.

During the install I received an error that wasn't mentioned in the Oracle install procedures or in the article I used to guide the install ("Link 1" below).  The error occurred when the installer was trying to call a target in "".  The message was:

INFO: /lib64/ undefined reference to `memcpy@GLIBC_2.14'"

The solution essentially involved installing glibc-static and making the necessary updates to the file.  See "Link 3" below for details on how to resolve the error.

To install Oracle 11g Enterprise, first follow the steps in the post at "Link 1" below, but you will likely encounter the error described above during the "Link Binaries" phase of the install.  If you do, then follow the steps in "Link 3" to resolve the issue with "".

Link 1:

Link 2:

Link 3:

Tuesday, May 12, 2015

Oracle Enterprise Manager 11g Installation Error - Listener is not up or database service is not registered with it.

While attempting to install Oracle Enterprise Manager for Oracle 11g during the last phase of the Oracle 11g installer, it encountered the following error: "Listener is not up or database service is not registered with it."

After checking to ensure the listener was up and doing a tnsping to the instance via the cmd window, I was at a loss, so I proceeded to Google to try to find the answer.  After some digging, it seemed to be a fairly common problem; however, none of the solutions worked for me, at least not individually.  After piecing together several solutions that others posted, I was finally able to get the installation to succeed.  Hopefully this helps you if you are stuck in a similar situation.

Note that this installation is on a machine without a static ip or domain.

If you haven't done so already, make sure to add the <ORACLE_HOME>\bin directory to your path so that you can run the Oracle utilities without having to be within the <ORACLE_HOME>\bin directory itself.

Let's get started!

First, install the Microsoft Loopback Adapter.  This will allow you to specify a dummy host/domain on the loopback ip.  See the following Microsoft TechNet post for details:

Once you have the loopback adapter created and have updated your hosts file, stop the Oracle listener via the command line using the command "LSNRCTL.EXE stop".  Use the "Net Configuration Assistant" to remove the listener and add a new one with all the default values and the same name.

Next, you will need to update your listener.ora and tnsnames.ora files to set the host as the dummy host/domain you specified in the hosts file.

The listener.ora file located at <ORACLE_HOME>\network\admin will contain something similar to the following (substitute the dummy host/domain you added to your hosts file):

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = <dummy host>)(PORT = 1521))
    )
  )
The tnsnames.ora file located at <ORACLE_HOME>\network\admin will contain something similar to the following:

<DB_SID> =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <dummy host>)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = <DB_SID>)
    )
  )
Now you will need to start the listener back up again using the command "LSNRCTL.EXE start".

Ensure the listener is up by using "tnsping <db sid>".

Next run the command "emca -config dbcontrol db -repos recreate" as Administrator and follow the configuration prompts displayed.  If it completes successfully it will also list the URL you need to go to in order to view the Enterprise Manager page.

Thursday, March 12, 2015

Parsing Java Source Files Using Reflection

Have you ever needed to parse a Java source file, but didn't want to write a parser for it?  You can avoid that by invoking the Java compiler programmatically to compile the source files into class files, then using a URLClassLoader to load each class into memory and reflection to pull out the information you need.  Let's take a look at how this works.

First you need to get a collection of all of the Java source files you wish to compile so that you can pass it to the compiler.

File packageBaseDir = new File("path/to/the/base/dir/of/the/source/files");
List<File> sourceFiles = new ArrayList<>();

public void collectSourceFiles(File packageBaseDir, List<File> sourceFiles) {
    File[] filesInCurrDir = packageBaseDir.listFiles();

    if ( filesInCurrDir == null ) {
        return; // not a directory, or an I/O error occurred
    }

    for ( File file : filesInCurrDir ) {
        if ( file.isDirectory() ) {
            collectSourceFiles(file, sourceFiles);
        } else if ( file.getName().endsWith(".java") ) {
            sourceFiles.add(file);
        }
    }
}

Now that you have all of the source files, you need to access the Java compiler to compile them.

void compileSourceFiles(List<File> sourceFiles) throws IOException {
    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
    StandardJavaFileManager fileManager = compiler.getStandardFileManager(null, null, null);
    Iterable<? extends JavaFileObject> compilationUnits = fileManager.getJavaFileObjectsFromFiles(sourceFiles);
    JavaCompiler.CompilationTask task = compiler.getTask(null, fileManager, null, null, null, compilationUnits);
    task.call();
    fileManager.close();
}

The above code accesses the Java compiler programmatically, using the StandardJavaFileManager to obtain the Java sources as JavaFileObjects to pass to the compiler. A CompilationTask is created and then run on the source files. By default, the compiled class files are written to the same directory as the Java source. Now that the source is compiled, you can use a URLClassLoader to load the classes into your program.

URLClassLoader urlClassLoader = new URLClassLoader(
                    new URL[]{packageBaseDir.toURI().toURL()},
                    this.getClass().getClassLoader());

Then you can simply load the class and start using the standard reflection methods on it.

Class clazz = urlClassLoader.loadClass(binaryClassName);
Method[] methods = clazz.getDeclaredMethods(); // or whatever else you're interested in
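Putting the pieces together, here is a minimal, self-contained sketch of the whole pipeline.  The class and file names (SourceParserDemo, Greeter) are made up for illustration; it writes a throwaway source file to a temp directory so you can run it as-is:

```java
import javax.tools.JavaCompiler;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class SourceParserDemo {
    public static void main(String[] args) throws Exception {
        // Write a throwaway source file so the demo is self-contained.
        Path baseDir = Files.createTempDirectory("src-demo");
        Path source = baseDir.resolve("Greeter.java");
        Files.write(source, Arrays.asList(
                "public class Greeter {",
                "    public String greet(String name) { return \"Hello, \" + name; }",
                "}"));

        // Compile; by default the class file is written next to the source.
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        try (StandardJavaFileManager fm = compiler.getStandardFileManager(null, null, null)) {
            compiler.getTask(null, fm, null, null, null,
                    fm.getJavaFileObjectsFromFiles(Arrays.asList(source.toFile()))).call();
        }

        // Load the compiled class from the base directory and reflect on it.
        try (URLClassLoader loader = new URLClassLoader(
                new URL[]{baseDir.toUri().toURL()},
                SourceParserDemo.class.getClassLoader())) {
            Class<?> clazz = loader.loadClass("Greeter");
            for (Method m : clazz.getDeclaredMethods()) {
                System.out.println(m.getName());
            }
        }
    }
}
```

Note that the URLClassLoader is pointed at the same base directory the sources were compiled in, since that is where the class files land.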

Happy coding!

Wednesday, January 7, 2015

Setup UPS with Synology Disk Station and CentOS Linux Server via USB and Network

Setting up a UPS that will automatically cause a Synology DiskStation to enter safe mode and halt a CentOS 7 server was fairly straightforward; however, there was not a lot of information showing how to do this, so I decided to write this post.

I will describe how to set up a UPS connected to a Synology DiskStation via USB, which will notify a machine running CentOS and the upsmon service when UPS events occur so that both machines can respond to a power outage accordingly.

DiskStation Setup

Connect your UPS to your Synology NAS using a USB cable.

Setup is very easy using the built-in support for a UPS Network Server on the Synology NAS (which uses NUT under the covers).

Login to your DiskStation and go to the Control Panel.  Select the "Hardware and Power" icon and go to the "UPS" tab.  You should see something similar to the screen shown below.

Select "Enable UPS Support" to enable communication with the UPS via the USB cable.

You can optionally set a period of time before the NAS enters Safe Mode or leave the default which will cause the NAS to enter Safe Mode when the UPS battery reaches a low status.  Safe mode un-mounts all disks and stops all services to prevent data loss on your NAS.

Next, check the "Enable network UPS server" box, then click "Permitted DiskStations".  Even though it says "Permitted DiskStations", it will work with any machine running the NUT upsmon service.  Once you click the "Permitted DiskStations" button, you will be presented with a form to fill in the IPs of the servers you want to notify when this NAS receives UPS events.

Enter the IP of the server that you want to receive the UPS events and click "OK".  Then "Apply" on the main UPS page.

Linux Server Setup (CentOS)

First you'll need to install nut via yum.

If you don't already have the epel repository in yum, you will need to install it.

yum install epel-release

Then you will need to install nut.

yum install nut

Once nut is installed you should have a nut user and group created by the installer.

Open /etc/ups/upsmon.conf.  We will need to update the configuration to allow it to listen for events from the Synology server.  Search for the "MONITOR" section.  You will need to update or add a line that looks like the following:

MONITOR ups@<ip of synology server>:3493 1 <user> <pass> slave

To get the user and pass values, SSH to your Synology NAS.  In the file located at /usr/syno/etc/ups/upsd.users it should specify the username and password.  Use those values in the MONITOR line on the Linux server.
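For reference, upsd.users files follow the standard NUT format; a sketch of what you might see is below.  The user name and password here are placeholders, so use whatever the actual file on your NAS contains:

```
# /usr/syno/etc/ups/upsd.users (format sketch - real values come from your NAS)
[<user>]
    password = <pass>
    upsmon slave
```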

Also, be sure to look at "SHUTDOWNCMD" in upsmon.conf and ensure it halts the system rather than powering it off completely, so that the machine comes back up automatically after a power outage.  This is the default in the file, so you shouldn't have to change anything.  In your Linux server's BIOS, also ensure the machine is set to automatically power on after power is restored.
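As an example, a halt-style shutdown command in upsmon.conf looks like the following (the exact path may vary by distro, so verify against your own file):

```
# /etc/ups/upsmon.conf
SHUTDOWNCMD "/sbin/shutdown -h +0"
```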

On your Linux machine, create the directory /var/run/nut and change its ownership to user nut and group nut.

mkdir -p /var/run/nut
chown nut:nut /var/run/nut

Modify /lib/systemd/system/nut-monitor.service and remove the references to nut-server.service if you are not running a NUT server on the Linux box, as that dependency will prevent upsmon from starting.  Since this machine is set up as the slave, you probably won't be running a NUT server, so make sure you take the entry out.
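As a rough sketch, the relevant part of the unit file looks something like the following (exact contents vary by package version); the nut-server.service entries are what you remove on a slave-only machine:

```
# /lib/systemd/system/nut-monitor.service (sketch - contents vary by version)
[Unit]
Description=Network UPS Tools - power device monitor and shutdown controller
# On a slave-only machine, remove nut-server.service from the lines below:
After=local-fs.target network.target nut-server.service
Requires=nut-server.service

[Service]
ExecStart=/usr/sbin/upsmon
Type=forking

[Install]
WantedBy=multi-user.target
```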

Next you will need to add upsmon to startup when your server starts.  Go to /etc/systemd/system.  Create a symbolic link to the nut-monitor.service.

ln -s /lib/systemd/system/nut-monitor.service nut-monitor.service

Finally, run the following commands to enable the service to be run by systemctl.

systemctl daemon-reload
systemctl enable nut-monitor.service

That's it!  When you experience a power outage, your UPS should kick on and notify your DiskStation, the DiskStation should enter safe mode and tell the Linux server to halt, and when power is restored both the DiskStation and the Linux server should start back up automatically.