HtmlCompressor and Guava – LimitInputStream ClassNotFoundException in Java when using Google Closure Compiler

So I started off my week by upgrading my organization's code base to the latest available libraries to make it more efficient. Little did I know that HtmlCompressor would make it a hard day for me. Here is what I was doing :-

I upgraded all libraries, including the HtmlCompressor library to 2.4.8 and the Guava library to version 23. The build itself was a seamless affair. But as soon as I ran the code, I was greeted with a runtime exception :-
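Reconstructed from memory, the error boiled down to this (the exact stack trace will depend on your code path):

java.lang.ClassNotFoundException: com.google.common.io.LimitInputStream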

HtmlCompressor provides two inbuilt implementations of JavaScript compressors :-
1. YUI Compressor
2. Google Closure Compiler

I was using the Google Closure Compiler.

After scratching my head for some time, I found out this was an issue with HtmlCompressor itself.
HtmlCompressor uses the ClosureJavaScriptCompressor class, which is a wrapper over the Google Closure Compiler.

This class internally uses the LimitInputStream class, which is a part of the Guava library. Guava version 14 marked the LimitInputStream class as deprecated, and version 15 removed it altogether. More information at Deprecated LimitInputStream.

So, if you have a later version of the Guava library in your classpath, it takes precedence over the older one, and the Google Closure Compiler throws the above exception when it tries to load the missing class.

This surfaces as a runtime exception rather than a compile-time error because the reference to LimitInputStream lives inside the pre-compiled HtmlCompressor jar: your own code compiles cleanly, and the missing class is only looked up when it is first loaded at runtime.

How to correct this?

There are two ways to correct this :-

Way I :
Use the outdated Guava library. The LimitInputStream class can be found in Guava versions <= 14; it is marked as deprecated in version 14 and was removed in version 15.
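In a Gradle build, pinning Guava back would look something like this (14.0.1 was the last release that still shipped the class; adjust the configuration name to your setup) :-

dependencies {
    // Pin Guava to the last version that still contains LimitInputStream.
    compile 'com.google.guava:guava:14.0.1'
}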

Way II :
Use your own implementation of the compressor. HtmlCompressor provides a hook for exactly this scenario, using :-
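A minimal sketch of what that looks like, assuming a hypothetical MyJsCompressor class that delegates to whatever minifier you prefer :-

import com.googlecode.htmlcompressor.compressor.Compressor;
import com.googlecode.htmlcompressor.compressor.HtmlCompressor;

public class MyJsCompressor implements Compressor {
    @Override
    public String compress(String source) {
        // Delegate to any JavaScript minifier you like; returning the
        // source unchanged is the trivial (no-op) implementation.
        return source;
    }
}

// Usage: plug the custom compressor into HtmlCompressor.
HtmlCompressor compressor = new HtmlCompressor();
compressor.setCompressJavaScript(true);
compressor.setJavaScriptCompressor(new MyJsCompressor());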

The only requirement is that your custom class must implement the Compressor interface provided by HtmlCompressor.

You can read more about HtmlCompressor and tune it to suit your needs.

This is how I did it. Please put your questions and suggestions in the comments section, and I will try to answer them or incorporate them into this post.

How to make a part of the page non-indexable for Google Crawler and Bot

We have all faced a situation where we want a page indexed, but need to keep some parts of it un-indexable. This often comes up with content that is consumed from a third party.

For example, TripAdvisor provides user reviews and other content through its APIs. Any website can buy access and start showing TripAdvisor's content on their site. But as per the basic concepts of SEO, this leads to content duplication and may result in the website being penalized. TripAdvisor published the content first on their own pages, so they will never be the ones penalized.

So, how do we make sure such content is not indexed by the crawler, but is still visible for the user experience?

Google provides the googleon and googleoff tags. They are written as follows :-
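Both are HTML comments; the word after the colon tells the crawler what to suspend (index is the common one; anchor, snippet and all are also recognized) :-

<!--googleoff: index-->
<!--googleon: index-->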

All you need to do is put the 'googleoff' tag, place the content which you don't want to be indexable, and then put the 'googleon' tag to make the crawler resume indexing. An example is,
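Something like this, with a hypothetical third-party review block in the middle :-

<p>Our own editorial content - indexed normally.</p>
<!--googleoff: index-->
<div class="third-party-reviews">
  <!-- Reviews pulled from a third-party API; hidden from the indexer. -->
  <p>"Great hotel, would stay again!" - a TripAdvisor user</p>
</div>
<!--googleon: index-->
<p>Indexing resumes from here.</p>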

More about other such lesser-known tweaks at :-

Google Guidelines

How to setup Data Analytics Visualization Tool – Redash on Docker

This blog post is about how to set up Redash on Docker.

Redash is a tool which separates the data-fetching logic from the visualization step. It lets you directly feed queries to different data sources, such as MongoDB, ElasticSearch, Amazon RDS, MySQL and Google BigQuery, amongst others.
All these queries can be mapped to predefined visualizations, such as graphs, bar charts, box plots etc. These visualizations can then be used to create dashboards. So, as you must have figured out, data from different data sources can be combined in one dashboard. How about that!

Redash provides an on-premise solution, which you can download and set up on your own servers, or a paid subscription where all you need to worry about are your queries and data sources.

More can be found at Redash.

Redash provides setup guidelines for all leading cloud hosting services, such as Amazon Web Services (AWS) and Google Cloud. The installation is fairly straightforward, and can be found at :-
On-Premise Setup for Amazon Web Services and Google Cloud.

In this post, I will explain how to setup Redash on Docker. Steps are as follows :-

STEP I : Install Docker

Docker installation is fairly straightforward for all operating systems, and instructions can be found online easily. This blog post deals with Mac OS X and Linux flavours.
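On most Linux distributions, for example, Docker's own convenience script does the job (do check the official docs for your OS before piping scripts into a shell) :-

curl -fsSL https://get.docker.com | sh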

STEP II : Configure Setup

We will use an Ubuntu image as the base image for this installation. I have added the code for two files. Download them for usage, or you can make your own 😉

Entry Point : supervisor.sh

This script will act as an entry point. An entry point is a script which is run every time a Docker instance boots up. Docker needs a foreground process to keep running in order for the container to stay up. Thus, the last statement we put is a tail command.
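A sketch of what supervisor.sh could contain; the service names are assumptions based on a stock Redash install, so adjust them to whatever the bootstrap script actually sets up :-

#!/bin/bash
# Start the services Redash depends on.
service postgresql start
service redis-server start
service nginx start
# supervisord manages the Redash web server and background workers.
/usr/bin/supervisord -c /etc/supervisor/supervisord.conf
# Docker stops the container when this script (PID 1) exits, so keep a
# foreground process alive forever.
tail -f /dev/null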

And,
Dockerfile : docker-redash

This uses a base Ubuntu image, adds all the required packages, downloads the Redash setup script, and executes it to set up everything you need. It is pretty self-explanatory.
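An illustrative sketch of the Dockerfile; the package list and the bootstrap URL are assumptions, so check Redash's setup documentation for the current script location :-

FROM ubuntu:16.04

# Packages needed to fetch and run the Redash bootstrap script.
RUN apt-get update && apt-get install -y curl wget sudo python-pip

# Download and run Redash's Ubuntu setup script.
RUN wget https://raw.githubusercontent.com/getredash/redash/master/setup/ubuntu/bootstrap.sh \
    && bash bootstrap.sh

# Copy in the entry-point script from STEP II.
COPY supervisor.sh /supervisor.sh
RUN chmod +x /supervisor.sh

EXPOSE 80
ENTRYPOINT ["/supervisor.sh"]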

STEP III : Execute commands for final setup
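Build the image from the Dockerfile above (the file and tag names here are just examples) :-

docker build -f docker-redash -t redash-local .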

This will take some time to run, but should finish without any hassles.

Once this executes completely, you should have a ready Docker image. Execute the following command to boot your instance :-
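Again, the names are examples; host port 9876 is mapped to the container's web port so that the URL below works :-

docker run -d --name redash -p 9876:80 redash-local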

Now, go to http://127.0.0.1:9876 and enjoy! 😉

Please post comments in case you find problems, and I will try and help you out!

Gradle not adding class files to war archive

I have this J2EE web application, which is Gradle based. Recently, I ran into a problem where Gradle was compiling the Java files, but was not adding the resulting class files to the war file or the exploded folder.

A similar type of problem would be where Gradle produces an incomplete war file or an incomplete exploded folder.

This turned out to be a rather strange issue, where the only thing I had to do was delete the .gradle folder found in the project directory.

The .gradle folder holds settings, cache files and other information for building the project. If you delete it, Gradle will simply recreate it on the next build.
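In practice, from the project root (assuming the Gradle wrapper and the war plugin) :-

# Remove the project-local Gradle cache, then rebuild the war from scratch.
rm -rf .gradle
./gradlew clean war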

Fairly easy, it was!

Setting up Cpuminer for Monero Mining on Ubuntu 16.04

This post is about setting up Cpuminer for Monero mining on Ubuntu 16.04. This example uses a MinerGate pool. More about MinerGate is at https://minergate.com/

Monero is one of the cryptocurrencies which still rely on CPU mining, alongside GPU mining, entirely due to the design of its proof-of-work algorithm.

The main website for this cryptocurrency is https://getmonero.org/. You can find more information on this site.

Following is a script which sets up an Ubuntu system :-
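A sketch of such a script, assuming the cpuminer-multi fork (one of the common forks with CryptoNight support); the package list may vary with your Ubuntu version :-

#!/bin/bash
# Install the build toolchain and libraries needed to compile the miner.
sudo apt-get update
sudo apt-get install -y git build-essential automake autoconf pkg-config \
    libcurl4-openssl-dev libjansson-dev libssl-dev libgmp-dev zlib1g-dev
# Fetch and build cpuminer-multi.
git clone https://github.com/tpruvot/cpuminer-multi.git
cd cpuminer-multi
./build.sh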

When this gets over, all you need to do is run the miner daemon, an example of which is as follows :-
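Something along these lines; the pool address and port are assumptions, so replace them (and YOUR_EMAIL, your MinerGate login) with the values from MinerGate's dashboard :-

# -a selects the CryptoNight algorithm used by Monero; the trailing &
# runs the miner in the background.
./cpuminer -a cryptonight -o stratum+tcp://xmr.pool.minergate.com:45560 \
    -u YOUR_EMAIL -p x &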

And your daemon starts in the background, and you will start earning Monero coins 🙂

Gradle Tasks for JavaScript and CSS Minification and Combination using ThreadPoolExecutor for Parallel Execution

This is my first post for this blog. And, I am going to start by posting a task I created for Gradle, for minification and combination of JS and CSS, using a ThreadPoolExecutor for faster minification and combination. Following is an explanation of each section, and at the end is the complete code that you would add to your Gradle build script. So, let's go!

First, we include the important stuff to start :-
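Something like the following; the YUI Compressor dependency is my assumption for the minifier (any Java-reachable minifier works), and it provides the CssCompressor class used later :-

import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

buildscript {
    repositories { mavenCentral() }
    dependencies {
        // Used below for CSS minification.
        classpath 'com.yahoo.platform.yui:yuicompressor:2.4.8'
    }
}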

Now, we will write a snippet which will represent all JS files in a code base :-
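Assuming a conventional webapp layout (adjust the dir to yours) :-

// Every .js file under the webapp folder, recursively; exclude files
// that are already minified so we don't minify them twice.
def jsFiles = fileTree(dir: 'src/main/webapp/js',
                       include: '**/*.js', exclude: '**/*.min.js')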

This will find all JavaScript files, recursively in all underlying folders. Let's write something similar for CSS, shall we? :-
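Same shape, different extension :-

def cssFiles = fileTree(dir: 'src/main/webapp/css',
                        include: '**/*.css', exclude: '**/*.min.css')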

Easy? Good!

The idea for minification and combination is that we need one task per file for minification, and one task per section for combination. That is, we create tasks dynamically, as follows :-
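A sketch of the dynamic task creation; the comment-stripping stand-in below only marks where the real JS minifier call goes (the YUI JavaScriptCompressor API needs an ErrorReporter wired up, which I have left out for brevity) :-

jsFiles.eachWithIndex { file, index ->
    task("jsMinifyTask_${index}") {
        doLast {
            def minified = new File(file.parent, file.name.replace('.js', '.min.js'))
            // Stand-in minification: strip block comments and blank lines.
            // Swap in your real minifier call here.
            minified.text = file.text
                .replaceAll(/(?s)\/\*.*?\*\//, '')
                .readLines().findAll { it.trim() }.join('\n')
        }
    }
}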

All tasks here will be created with a name jsMinifyTask_{index}. Again, one task per file for minification.

Something similar for CSS :-
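For CSS we can use the real YUI CssCompressor from the classpath dependency above, since its API is simple (-1 means no forced line breaks) :-

cssFiles.eachWithIndex { file, index ->
    task("cssMinifyTask_${index}") {
        doLast {
            def minified = new File(file.parent, file.name.replace('.css', '.min.css'))
            file.withReader { reader ->
                minified.withWriter { writer ->
                    new com.yahoo.platform.yui.compressor.CssCompressor(reader)
                        .compress(writer, -1)
                }
            }
        }
    }
}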

This code will create the minification tasks for CSS files, one task per file.

For combination, I wrote a small JSON structure to make maintenance easy. The JSON is :-
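Expressed here as a Groovy literal inside the build script (the file names are hypothetical placeholders): each entry maps an output file to the minified inputs that should be concatenated into it :-

def combinations = [
    [output: 'dist/js/combined.min.js',
     inputs: ['src/main/webapp/js/util.min.js',
              'src/main/webapp/js/app.min.js']],
    [output: 'dist/css/combined.min.css',
     inputs: ['src/main/webapp/css/base.min.css',
              'src/main/webapp/css/layout.min.css']],
]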

This is pretty self-explanatory, I guess? Add more entries to produce more combined files.

The code which uses this to create tasks for combining files is :-
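One combine task per entry; each simply concatenates its minified inputs, in order, into the output file :-

combinations.eachWithIndex { combo, index ->
    task("combineTask_${index}") {
        doLast {
            def out = file(combo.output)
            out.parentFile.mkdirs()
            out.text = combo.inputs.collect { file(it).text }.join('\n')
        }
    }
}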

So now we are done with creating tasks, and it is time to make them run! I created a wrapper task for this. It minifies all files in parallel, and then combines them in parallel. Makes it faster 😉
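A sketch of that wrapper; note that invoking other tasks' actions directly like this sidesteps Gradle's own scheduling, which is precisely the trick that lets the work run on a plain thread pool (Executors.newFixedThreadPool returns a ThreadPoolExecutor under the hood) :-

task minify {
    doLast {
        // Run every task whose name starts with one of the given
        // prefixes on a pool sized to the machine's processor count.
        def runInPool = { taskPrefixes ->
            def pool = Executors.newFixedThreadPool(
                    Runtime.runtime.availableProcessors())
            tasks.matching { t -> taskPrefixes.any { t.name.startsWith(it) } }
                 .each { t -> pool.execute { t.actions.each { it.execute(t) } } }
            pool.shutdown()
            pool.awaitTermination(30, TimeUnit.MINUTES)
        }
        // Minify everything first, then build the combined files.
        runInPool(['jsMinifyTask_', 'cssMinifyTask_'])
        runInPool(['combineTask_'])
    }
}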

Runtime.runtime.availableProcessors() finds the number of available processors, so the pool is sized to use all of them.

All that is required now is a simple 'dependsOn minify' on the tasks you already have in your build scripts. Voila!
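For example, hooking it into the war task :-

war.dependsOn minify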

For reference, the complete code is simply all of the snippets above, combined in order into your build script.

Let me know in comments in case you have problems 🙂