Protobuf + Apache Pulsar with Spring Boot

Protocol Buffers (protobuf) is a language- and platform-neutral mechanism for serializing structured data. We define the data structure in the protobuf format and generate source code to write and read that data to and from a variety of data streams, using a variety of languages.

Protobuf currently supports generated code in Java, Python, and several other languages.

Apache Pulsar is a cloud-native, distributed messaging and streaming platform.

In this blog post, we will define a simple protobuf message, use protobuf-maven-plugin to generate Java source code, and use protobuf as the message format (schema type) for an Apache Pulsar pub-sub application.

1) Protobuf model + Java Source generation:

By default, protobuf-maven-plugin looks in the src/main/proto folder for .proto files. We have the following proto files to represent the Person and Greeting objects:


syntax = "proto3";
package app.model;
message Person {
  string fName = 1;
  string lName = 2;
}

syntax = "proto3";
package app.model;
message Greeting {
  string greeting = 1;
}
Here's the basic configuration for protobuf-maven-plugin. It also requires os-maven-plugin. Please refer to the GitHub project for the full source.
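The configuration (elided here) might look roughly like the following sketch; the plugin versions are examples, check Maven Central for current ones:

```xml
<build>
  <extensions>
    <!-- os-maven-plugin detects the OS classifier needed to resolve the protoc binary -->
    <extension>
      <groupId></groupId>
      <artifactId>os-maven-plugin</artifactId>
      <version>1.6.2</version>
    </extension>
  </extensions>
  <plugins>
    <plugin>
      <groupId>org.xolstice.maven.plugins</groupId>
      <artifactId>protobuf-maven-plugin</artifactId>
      <version>0.6.1</version>
      <configuration>
        <protocArtifact>${os.detected.classifier}</protocArtifact>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>compile</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```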




2) Spring Boot + Apache Pulsar

PulsarClient Bean: Spring Boot doesn't have official auto-configuration for Apache Pulsar yet, so we have to create the PulsarClient bean ourselves.


PulsarClient pulsarClient() throws PulsarClientException {
    return PulsarClient.builder()
            .serviceUrl("pulsar://localhost:6650").build();
}

Also the Producer. Please note the parameter of newProducer: we are using PROTOBUF as the schema type.

Producer bean: We will need to create a producer bean for the Person model in main-app and for the Greeting model in greeting-service.

Producer<THE_MODEL> personProducer(PulsarClient pulsarClient) throws PulsarClientException {
    return pulsarClient.newProducer(Schema.PROTOBUF(THE_MODEL.class))
            .topic("person-topic") //example topic name
            .create();
}


Pulsar Listener Config: We can register the message listener using @PostConstruct as follows:

final PulsarClient pulsarClient;

@PostConstruct
private void initConsumer() throws PulsarClientException {
    pulsarClient.newConsumer(Schema.PROTOBUF(THE_MODEL.class))
            .topic("person-topic") //example topic name
            .subscriptionName("my-subscription") //example subscription name
            .messageListener((consumer, msg) -> {
                //message handler logic
            })
            .subscribe();
}

That's all the configuration we will need.

3) How to run?

1) Clone the project and run mvn clean compile to generate Java sources and compile the project.

2) Run an Apache Pulsar instance using Docker (or you can run it manually):

docker run -it   -p 6650:6650   -p 8080:8080   apachepulsar/pulsar:2.2.0   bin/pulsar standalone

3) Start GreetingApp and MainApp

4) Hit localhost:8082/greet/Joe/Biden to publish a message. You will see it being received by greeting-service, and the greeting being sent back to the queue, where it is received by main-app.

The project structure:

├── pom.xml
├── greeting-service
│   ├── pom.xml
│   └── src
│       └── main
│           ├── java
│           │   └── gt
│           │       └── greeting
│           │           └──
│           └── resources
│               └──
├── main-app
│   ├── pom.xml
│   └── src
│       └── main
│           ├── java
│           │   └── gt
│           │       └── mainapp
│           │           └──
│           └── resources
│               └──
├── protobuf-model
│   ├── pom.xml
│   └── src
│       └── main
│           └── proto
│               ├── Greeting.proto
│               └── Person.proto



  • GitHub project

Read all tables and columns in JPA/Hibernate

How to get metadata about all tables and columns managed by JPA/Hibernate?

There are many ways to get a list of the tables and columns in a project that uses JPA/Hibernate. Each has pros and cons.

Option A) Direct Query on INFORMATION_SCHEMA.

The simplest way is to query INFORMATION_SCHEMA directly, or whatever similar schema the database uses internally.

For MySQL, H2, MariaDB, etc., the following would work. We will need a specific query for each database.
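The query (elided here) can be sketched as follows; the INFORMATION_SCHEMA column names are standard, but the schema filter value is an example you should adjust for your database:

```sql
ORDER BY TABLE_NAME, ORDINAL_POSITION;
```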


Option B) DB Independent query using JDBC API

We can run a DB-independent query by using the JDBC API to return the metadata. Under the hood, it uses the DB-specific query provided by the DB driver.

DataSource ds = ...; //create/wire the DataSource object
DatabaseMetaData metaData = ds.getConnection().getMetaData();
ResultSet schemasRS = metaData.getSchemas();
ResultSet tablesRS = metaData.getTables(null, null, null, new String[]{"TABLE"});

We can iterate over each ResultSet to get the schemas, tables, and columns. Note that this returns everything the database has, not just what the application uses.

Option C) Use EntityManager MetaModel to read Entity classes

In order to retrieve only the entities/tables that the application actually uses, we can rely on Hibernate's metamodel implementation as follows:

EntityManager em; //autowire the bean
MetamodelImplementor metaModelImpl = (MetamodelImplementor) em.getMetamodel();
List<String> tableNames = metaModelImpl.entityPersisters().values().stream()
        .map(ep -> ((AbstractEntityPersister) ep).getTableName())
        .collect(Collectors.toList());

Option D) Use Hibernate Magic

Use Hibernate's Metadata class, which stores the ORM model determined by the provided entity mappings.

org.hibernate.boot.Metadata metadata; //getting the Metadata is tricky though

for (PersistentClass persistentClass : metadata.getEntityBindings()) {
    String tableName = persistentClass.getTable().getName();
}

Download Files from FTP using JSch java library

SSH provides support for secure remote login (similar to PuTTY), secure file transfer (SCP or SFTP), and secure TCP/IP and X11 forwarding. JSch is a Java implementation of the SSH2 protocol.

In this example, we will see how we can use the JSch library to log in to an SFTP server and download files.

First, add the following dependency to your pom.xml:

<dependency>
    <groupId>com.jcraft</groupId>
    <artifactId>jsch</artifactId>
    <version>0.1.54</version>  <!-- or latest version -->
</dependency>

The JSch API is pretty simple: first you create a session and open a channel, then you can use one of the many methods such as cd, ls, put, and get to change directory, list contents, upload a file, or download a file, respectively.

Create Session:

JSch jsch = new JSch();
Session session = jsch.getSession("demo", "", 22);

Create Channel:

ChannelSftp channel = (ChannelSftp) session.openChannel("sftp");

Change folder:"/a/folder");

List content of a folder:

Vector<ChannelSftp.LsEntry> entries =;

Download file:

channelSftp.get(String fileNameInFtp, String  destinationFile);

Upload File:

channelSftp.put(String src, String dst) //default mode is OVERWRITE
channelSftp.put(String src, String dst, int mode)

Upload modes:
public static final int OVERWRITE = 0;
public static final int RESUME = 1;
public static final int APPEND = 2;

A complete example to download files from FTP:

In this example, we are using a publicly available ftp server  as described in
 import com.jcraft.jsch.*;  
 import java.util.*;  
 public class JschDownload {  
public static void main(String[] args) { Session session = null; ChannelSftp channel = null; try { JSch jsch = new JSch(); session = jsch.getSession("demo", "", 22); session.setPassword("password");
//to prevent following exception for sftp //com.jcraft.jsch.JSchException: UnknownHostKey: RSA key fingerprint is .. Properties config = new Properties(); config.put("StrictHostKeyChecking", "no"); session.setConfig(config); session.connect(); System.out.println("session connected");
//various channels are supported eg: shell, x11, channel = (ChannelSftp) session.openChannel("sftp"); channel.connect(); System.out.println("channel connected");
downloadFromFolder(channel, "/"); downloadFromFolder(channel, "/pub/example/");
     //in order to download all files including sub-folders/sub-sub-folder, we should iterate recursively System.out.println("File Uploaded to FTP Server Successfully.");
} catch (Exception e) { e.printStackTrace(); } finally { if (channel != null) { channel.disconnect(); } if (channel != null) { session.disconnect(); } } } static void downloadFromFolder(ChannelSftp channelSftp, String folder) throws SftpException { Vector<ChannelSftp.LsEntry> entries =; new File("download").mkdir();
//download all files (except the ., .. and folders) from given folder for (ChannelSftp.LsEntry en : entries) { if (en.getFilename().equals(".") || en.getFilename().equals("..") || en.getAttrs().isDir()) { continue; }
System.out.println("Downloading " + (folder + en.getFilename()) + " ----> " + "download" + File.separator + en.getFilename()); channelSftp.get(folder + en.getFilename(), "download" + File.separator + en.getFilename()); } } }

Spock - call mocked method multiple times and return different results for same input

Spock - return different mock result for same input

In unit testing, we create and use mock objects for any complex/real object that's impractical or impossible to incorporate into a unit test. Generally, any component that's outside the scope of the unit test is mocked. Mocking frameworks like JMock, EasyMock, and Mockito provide an easy way to describe the expected behavior of a component without writing the full implementation of the object being mocked.

In the Groovy world, the Spock testing framework includes powerful mocking capabilities without requiring additional mocking libraries.

In this article, I am going to describe how we can create mock objects that can be called multiple times and return different values on each call.

First, let's see how we can return a fixed value from a mocked method. For this, we use the right-shift (>>) operator:
//the input parameter is _ (any) and it will return "ok" every time
mockObj.method(_) >> "ok"
To return different values for different invocations:
//return 'ok' for param1 and 'not-ok' for param2
mockObj.method("param1") >> "ok"
mockObj.method("param2") >> "not-ok"

Finally, to return different values for the same parameter, we use the triple-right-shift (>>>) operator.
mockObj.method("param") >>> ["", "ok", "not-ok"]

It will return an empty string for the first invocation, "ok" for the second, and "not-ok" for the third and all subsequent invocations.

GraalVM setup and native image generation using Maven plugin


Today we are going to use GraalVM to generate a native image (one of GraalVM's many features) for the XML parser that we developed earlier. The native image contains the whole program in machine code, ready for immediate execution. It has the following advantages:
  • faster startup time
  • no need for a JVM (JDK/JRE) to execute the application
  • low memory footprint

1) GraalVM setup

I used sdkman to install GraalVM on my Linux machine, using the following steps. First, I listed all available JDK distributions, then I ran sdk install to install the latest GraalVM version. At the end of the installation, I selected Yes to make this version the default JDK.

sdk list java
sdk install java 20.1.0.r11-grl 

Then I verified the installation using the following:
java -version 

I got the following, so everything is working great so far:
openjdk version "11.0.7" 2020-04-14
OpenJDK Runtime Environment GraalVM CE 20.1.0 (build 11.0.7+10-jvmci-20.1-b02)
OpenJDK 64-Bit Server VM GraalVM CE 20.1.0 (build 11.0.7+10-jvmci-20.1-b02, mixed mode, sharing)

If you want to do it manually: download the zip file, extract it, add it to the system path, and make it the default JDK.

2) Native image tools installation

Before you can use the GraalVM native-image utility, you need a working C developer environment. For this:

- On Linux, you will need GCC, and the glibc and zlib headers. 
Examples for common distributions:

    # dnf (rpm-based)
    sudo dnf install gcc glibc-devel zlib-devel libstdc++-static
    # Debian-based distributions:
    sudo apt-get install build-essential libz-dev zlib1g-dev

- On macOS
    Xcode provides the required dependencies:

    xcode-select --install

- On Windows, you will need to install the Visual Studio 2017 Visual C++ Build Tools

After this, you can run the following to install the native-image utility
$JAVA_HOME/bin/gu install native-image  

Here, $JAVA_HOME is your GraalVM installation directory

3) Finally, use the GraalVM native-image Maven plugin to generate the native image during the package phase

For this, I added the following to my XML parser's pom.xml file:


Plugin: it automatically detects the jar file and the main class within it. I've specified imageName = xmltocsv as the name of the executable.

        <!--The plugin figures out what jar files it needs to pass to the native image
        and what the executable main class should be. -->

The version:
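The plugin and version snippets (elided above) might look roughly like the following sketch; note that the plugin's Maven coordinates have varied across GraalVM releases, so check the documentation for your version:

```xml
<plugin>
  <groupId>org.graalvm.nativeimage</groupId>
  <artifactId>native-image-maven-plugin</artifactId>
  <version>20.1.0</version>
  <configuration>
    <!-- The plugin figures out what jar files it needs to pass to the native image
         and what the executable main class should be. -->
    <imageName>xmltocsv</imageName>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>native-image</goal>
      </goals>
      <phase>package</phase>
    </execution>
  </executions>
</plugin>
```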

And I ran the following to generate the native image:
mvnw clean package native-image:native-image

It produced the following files in my target folder:

└── target
    ├── xmltocsv            //this is the binary file; it can run without a JVM
    └── xmltocsv-FINAL.jar  //this requires a JVM to run

4) Testing

On my Linux machine, I executed the xmltocsv binary:
$ ./target/xmltocsv ../big3.xml ../big3.csv

It started faster and used less memory, but took a little longer to convert the file than running the jar (because we lost the JVM's runtime optimizations).

The complete example code is available here:

Create a huge data file for load testing

In this short blog post, I am going to describe how you can create a big file for load testing. In most cases, you will only need step #2 to combine big files.

For my earlier blog post read-huge-xml-file-and-convert-to-csv, I needed to create a very big XML file (6GB+) without crashing my machine!!

My sample XML file looks like the following, with millions of <book> elements:

   <book id="001" lang="ENG">  
   <book id="002" lang="ENG">  

1) Since the file started with <catalog> and ended with </catalog>, I stripped the first and last lines and created a small file with just a few <book> elements.

<book id="001" lang="ENG">  
 <book id="002" lang="ENG">  
 <book id="003" lang="ENG">  
 <book id="004" lang="ENG">  

2) Used 'cat' to join files. The following creates bigger.xml by combining five copies of small.xml:

cat small.xml  small.xml  small.xml  small.xml  small.xml >> bigger.xml

I can repeat the following to grow the file size exponentially:

cat bigger.xml  bigger.xml  bigger.xml  bigger.xml  bigger.xml >> evenbigger.xml
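If you prefer to stay in Java (for example, on Windows without cat), the same doubling trick can be sketched as follows; the class name, file names, and repeat count are my own examples:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FileDoubler {

    // Append the file's current content to itself 'times' times,
    // doubling its size on each pass (2^times copies of the original).
    static void doubleFile(Path file, int times) throws Exception {
        for (int i = 0; i < times; i++) {
            byte[] content = Files.readAllBytes(file);
            Files.write(file, content, StandardOpenOption.APPEND);
        }
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("small", ".xml");
        Files.write(f, "<book id=\"001\" lang=\"ENG\"></book>\n".getBytes());
        doubleFile(f, 3); // 1 line -> 2 -> 4 -> 8 lines
        System.out.println(Files.readAllLines(f).size()); // prints 8
    }
}
```

For a real 6GB file, streaming the copy instead of readAllBytes would avoid holding the whole file in memory; the version above is kept short for illustration.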

3) Finally, I used 'sed' to add <catalog> at the beginning and </catalog> at the end to create a proper XML file:

 sed -i '1s/^/<catalog> /' bigger.xml
 sed -i -e '$a</catalog>' bigger.xml

4) Let's verify using head and tail:

  head -10 bigger.xml
  tail -10 bigger.xml

I can see <catalog> at the start and </catalog> at the end. Hurray....

Java read huge XML file and convert to CSV

The SAX parser uses the event handler org.xml.sax.helpers.DefaultHandler to efficiently parse an XML file and handle the intermediate results.

It provides the following important methods, where we can write custom logic to take a specific action on each event:
  • startDocument() and endDocument() – called at the start and end of an XML document.
  • startElement() and endElement() – called at the start and end of a document element.
  • characters() – called with the text contents between the start and end tags of an XML document element.
We will use this class to read a HUGE XML file (6.58GB; it should support any size without a problem) efficiently, convert it, and write it to a CSV file.
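As a minimal, self-contained illustration of these callbacks before diving into the real project (the class name and sample XML below are made up for this sketch):

```java
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;
import javax.xml.parsers.SAXParserFactory;
import java.io.ByteArrayInputStream;

public class SaxDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<catalog><book id=\"001\">Groovy</book></catalog>";
        StringBuilder out = new StringBuilder();

        // Record each SAX event as it fires, in document order.
        DefaultHandler handler = new DefaultHandler() {
            @Override
            public void startElement(String uri, String localName, String qName, Attributes attrs) {
                out.append("start:").append(qName).append(' ');
            }
            @Override
            public void endElement(String uri, String localName, String qName) {
                out.append("end:").append(qName).append(' ');
            }
            @Override
            public void characters(char[] ch, int start, int length) {
                out.append("text:").append(new String(ch, start, length)).append(' ');
            }
        };

        SAXParserFactory.newInstance().newSAXParser()
                .parse(new ByteArrayInputStream(xml.getBytes()), handler);
        System.out.println(out.toString().trim());
        // prints: start:catalog start:book text:Groovy end:book end:catalog
    }
}
```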

I am going to reuse the code from my old blog post xml-parsing-using-saxparser and update it for this purpose. The final code is available in the GitHub project java-read-big-xml-to-csv.

Java HUGE XML to CSV - project structure

How to Import/Run:

It's a simple Maven project (with no dependencies). You can import it into your IDE, or use the command line to compile and run it.
If you plan on using the command line: to compile and create a runnable jar file, go to the root of the project and run mvnw clean package.
Then you can run the executable as follows:
java -jar target\xmltocsv-FINAL.jar  C:\folder\input.xml  C:\folder\output.csv

The code:

The SaxParseEventHandler class takes a RecordWriter as a constructor parameter:
public SaxParseEventHandler(RecordWriter<Book> writer) {

We create a new Book record on the startElement event:

public void startElement(String s, String s1, String elementName, Attributes attributes) {
    /* handle start of a new Book tag and attributes of an element */
    if (elementName.equalsIgnoreCase("book")) { //start
        bookTmp = new Book();
    }
}

and we write the parsed book data to the file on the endElement() event:

public void endElement(String s, String s1, String element) {
    if (element.equals("book")) { //end
        writer.write(bookTmp, counter);
    }
}

RecordWriter is a simple wrapper for FileWriter to write content to a file. We are currently writing T.toString() to the file:

public void write(T t, int n) throws IOException {
    fw.write(t.toString());
    if (n % 10000 == 0) {
        fw.flush();
    }
}

And here is the main 'launcher' class:

SAXParserFactory factory = SAXParserFactory.newInstance();
try (RecordWriter<Book> w = new RecordWriter<>(outputCSV)) {
    SAXParser parser = factory.newSAXParser();
    parser.parse(inputXml, new SaxParseEventHandler(w));
}

Results on a 16GB RAM, Core i5, 6MB L3 cache, SSD | Windows machine:
Max RAM usage: 190MB
Time Taken:
For the file big2.xml with size 118MB
- JDK8 - 8-9 sec
- JDK 11 - 6-7 sec
- JDK 14 - 5 sec 

big3.xml with size 6.58GB takes about 2 minutes

Next Steps: create a binary using GraalVM. I will keep posting !!

Java Compress/Decompress String/Data

Java provides the Deflater class for general-purpose compression using the ZLIB compression library. It also provides DeflaterOutputStream, which uses Deflater to filter a stream of data by compressing (deflating) it and then writing the compressed data to another output stream. The equivalent Inflater and InflaterOutputStream classes handle the decompression.


Here is an example of how to use DeflaterOutputStream to compress a byte array:
static byte[] compressBArray(byte[] bArray) throws IOException {
    ByteArrayOutputStream os = new ByteArrayOutputStream();
    try (DeflaterOutputStream dos = new DeflaterOutputStream(os)) {
        dos.write(bArray);
    }
    return os.toByteArray();
}

Let's test:

byte[] op = CompressionUtil.compressBArray(input);
System.out.println("original data length " + input.length +
        ",  compressed data length " + op.length);

This prints 'original data length 71,  compressed data length 12'.


Decompression works the same way, using InflaterOutputStream:

public static byte[] decompress(byte[] compressedTxt) throws IOException {
    ByteArrayOutputStream os = new ByteArrayOutputStream();
    try (OutputStream ios = new InflaterOutputStream(os)) {
        ios.write(compressedTxt);
    }
    return os.toByteArray();
}

Decompressing the compressed bytes and printing them gives back the original 'input' string.

Let's convert the byte[] to Base64 to make it portable

In the above examples, we get the compressed data as a byte array (byte[]), which is an array of numbers.

But we might want to transmit the compressed data to a file, JSON, or a DB, right? To make it portable, we can convert it to Base64 as follows:

byte[] bytes = {}; //the byte array    
String b64Compressed = new String(Base64.getEncoder().encode(bytes));
byte[] decompressedBArray = Base64.getDecoder().decode(b64Compressed);
//convert to original string if input was string
new String(decompressedBArray, StandardCharsets.UTF_8);

Here's the complete code and the test cases

package compress;

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CompressionUtil {

    public static String compressAndReturnB64(String text) throws IOException {
        return new String(Base64.getEncoder().encode(compress(text)));
    }

    public static String decompressB64(String b64Compressed) throws IOException {
        byte[] decompressedBArray = decompress(Base64.getDecoder().decode(b64Compressed));
        return new String(decompressedBArray, StandardCharsets.UTF_8);
    }

    public static byte[] compress(String text) throws IOException {
        return compress(text.getBytes());
    }

    public static byte[] compress(byte[] bArray) throws IOException {
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        try (DeflaterOutputStream dos = new DeflaterOutputStream(os)) {
            dos.write(bArray);
        }
        return os.toByteArray();
    }

    public static byte[] decompress(byte[] compressedTxt) throws IOException {
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        try (OutputStream ios = new InflaterOutputStream(os)) {
            ios.write(compressedTxt);
        }
        return os.toByteArray();
    }
}


Test case:

package compress;

import org.junit.jupiter.api.Test;

import java.nio.charset.StandardCharsets;

public class CompressionTest {

    String testStr = "..."; //the sample input; its declaration was missing from the original snippet

    @Test
    void compressByte() throws IOException {
        byte[] input = testStr.getBytes();
        byte[] op = CompressionUtil.compress(input);
        System.out.println("original data length " + input.length + ",  compressed data length " + op.length);
        byte[] org = CompressionUtil.decompress(op);
        System.out.println(new String(org, StandardCharsets.UTF_8));
    }

    @Test
    void compress() throws IOException {
        String op = CompressionUtil.compressAndReturnB64(testStr);
        System.out.println("Compressed data b64: " + op);
        String org = CompressionUtil.decompressB64(op);
        System.out.println("Original text: " + org);
    }
}

Note: Since the compress and decompress methods operate on byte[], we can compress/decompress any data type.

Game of Thrones Style farewell email to coworkers

I left my job for another opportunity after working there for 8.5 years. Since I had nothing left to do on my last day, I decided to get a little creative and looked up how to write a farewell email in Game of Thrones style. With a little help from the internet, I came up with the following email.

My lords and ladies,

It has been a great honor serving this most noble software development house. I fought for this house with all my heart for 8.5 years and we won some glorious battles together. Songs will be sung for the next thousand years about our great victories over White Walkers(Bugs and Issues).

It’s a bittersweet ending for me to part ways and go North of the Wall now (NEW_COMPANY is north of OLD_COMPANY).

Whatever I do in the future, I now take this pledge in the sights of Old Gods and the New, that I will forever cherish the time I spent here and I hope you Lords and Ladies do the same.

My Watch has ended but don’t forget to send me a raven from time to time on 203-XXX-XXXX. You can also find me wandering around Citadel at 

Winter is coming 😝 in 5 months. Just ended though (IT'S MINNESOTA).


Ganesh Tiwari | A Crow Member

Project to test your programming skills


A Guessing Game - to become Full Scope/Stack Developer

If you are wondering what would be the perfect project to practice your programming skills, you are in the right place!
It's a simple number guessing game. We start with a console app and migrate it to a web app with lots of features.


1. Console App:

  • Read a number N from the console within a range (MIN, MAX). Your code should then generate a random number CN in the same range (MIN, MAX) and check whether the computer-generated random number CN and the user-entered number N match.
    If they match, the user wins. If they don't match, the computer wins.
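A possible sketch of the console version (the class and method names are my own; the rest follows the spec above):

```java
import java.util.Random;
import java.util.Scanner;

public class GuessingGame {

    // Returns "USER" if the user's guess matches the computer's random
    // pick in [min, max], otherwise "COMPUTER".
    static String play(int guess, int min, int max, Random random) {
        int cn = min + random.nextInt(max - min + 1); // computer's number CN
        return (cn == guess) ? "USER" : "COMPUTER";
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(;
        System.out.print("Enter MIN: ");
        int min = in.nextInt();
        System.out.print("Enter MAX: ");
        int max = in.nextInt();
        System.out.print("Enter your guess: ");
        int guess = in.nextInt();
        System.out.println("Winner: " + play(guess, min, max, new Random()));
    }
}
```

Passing the Random in as a parameter keeps the win/loss logic testable, which pays off later when the same logic moves to the web backend.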

2. (Optional) Desktop app:

  • Create an interface to enter the MIN/ MAX number and the user guess.
    MIN :        [ Textbox  ]
    MAX:         [ Textbox  ]
    User Guess   [ Textbox  ]
Also provide a button with label "Play" that generates the random number and displays a message in Label if the user won.
    [ Play  ]
  • Save the win/loss counts and winning/losing number in a text file 'stat.csv'. Display the average win/loss on UI when the user closes the app.
    2020-05-10 20:01:50, USER, 5, 10
    2020-05-10 20:02:50, USER, 3, 4
    2020-05-11 20:05:50, COMPUTER, 7, 9

3. Two-player:

  • Update the GUI or console application to allow two users to play with the computer. Both users enter their guesses and click Play. The user that makes the correct guess wins.
    GUI mockup:

    User A's Guess   [  Text Box  ]
    User B's Guess   [  Text Box  ]

                     [    PLAY    ]

4. (Optional) Multi-Player Game - over socket connection:

  • Update the GUI application to allow several users to play simultaneously. All the users will have a copy of the application and can join the Game by running the application on their computer. The first user to start the game can act as a Server.

5. Web Application:

  • Create a web application to play the same game in the browser. Reuse the previous code on the backend
  • Support single-player mode (play with the computer).
  • Add a sign-up page to register users. Update the logic to allow only registered/logged-in users to play. Use ReCaptcha to prevent robots from making requests.
  • Block users from playing for more than 1 hour; lock them out for 2 hours.
  • Multi-player: list online users and provide the ability to request/accept to play with the user. Use WebSocket to listen for updates in realtime.
  • Store the win/loss statistics into DB.
  • Generate a CSV report with stats about the winners, numbers, etc. that you can download from the web interface.

 6. Fun Stuff

  • Schedule the 'winner stat' report to run every day and deliver it to your email address.
  • Set up a background job that sends an account deactivation email if the user has not logged in in the last 20 days
  • Set up a background job to deactivate the user if they have not logged in in the last 30 days
  • Set up a public web API to expose information about the winners
  • Use caching to read the user profile from a cache instead of reading from the DB on every request

7. Operation:

  • Write a Dockerfile to run your app in Docker
  • Set up static code analysis with a local SonarQube instance. You can use Docker to run SonarQube. Take care of the SonarQube warnings.
  • Deploy your app to a cloud environment (eg Heroku, AWS, Azure)


  • Focus on readability and reusability throughout the development.
  • Try to make your app modular
  • Use a build system
  • Use git


Want to update this?

Please submit a PR at