Friday, October 14, 2016

Pivotal Cloud Foundry (PCF) Integration with Elastic Cloud Storage (ECS)

Recently, I was involved in integrating Pivotal Cloud Foundry with Elastic Cloud Storage (ECS), an object storage solution from EMC.

In this post, I'm going to document the hiccups we faced during this integration and how we resolved them, so that it is easier for other folks who would like to carry out this integration.



References:

The service broker code is on GitHub: https://github.com/spiegela/ecs-cf-service-broker



1. application.yml file Configuration:


The first task is to update the application.yml file in the broker code with the correct configuration.

Note the Spring profiles defined in the yml file. The active Spring profile is set to 'development' in the build.gradle file, so we need to update the corresponding section of the yml file.

Under the broker section:

a) Provide a valid ECS namespace name (The namespace name is case sensitive). Under this namespace, PCF would create a bucket to store all the metadata related to this integration.


b) Provide a valid ECS replication group name (case sensitive).



c) Provide a management endpoint, which is generally an https endpoint (on port 4443).
For example, https://10.20.30.40:4443

d) Provide an object endpoint, which in our case was the same as the management endpoint (or with an /object/bucket suffix in the URL).

e) Add a password property in this section and set it to the ECS password (this attribute is missing in the shipped application.yml file, but you can see the property in BrokerConfig.java). A rough sketch of the resulting broker section is shown below.
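For reference, here is a minimal sketch of what the broker section might end up looking like under the active profile. The key names are illustrative placeholders and should be verified against BrokerConfig.java in the broker code; the values are obviously environment specific:

broker:
  namespace: my-ecs-namespace                  # case-sensitive ECS namespace
  replication-group: my-replication-group     # case-sensitive replication group
  management-endpoint: https://10.20.30.40:4443
  object-endpoint: https://10.20.30.40:4443   # same as the management endpoint in our case
  password: my-ecs-password                   # property exists in BrokerConfig.java but not in the shipped yml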

2. Enable SSL handshake communication:


The second task is to enable the SSL handshake between the service broker and ECS. The broker uses the public key file localhost.pem, which is present in the src/main/resources folder. We need to replace this file with a public key file corresponding to our ECS installation.

Let's export the public certificate from our ECS application. 
a) Open the ECS application in a browser (say Chrome)

b) Follow these steps to save the certificate from ECS to local file system.
http://docs.bvstools.com/home/ssl-documentation/exporting-certificate-authorities-cas-from-a-website
c) Let's say that in step b) above, the file was saved as ecscert.cer

d) Now, we need to convert the public key file format from .cer to .pem. We will use the Java keytool for this; other tools could perform this step as well.

e) Run the following commands from a command prompt. The first two commands create a temporary sample keystore (named 'mytest' here; the name doesn't matter). While creating it, keytool asks for a password, which should be remembered as it is required in the later steps.
In the third command below, provide the path to ecscert.cer. If you are running these commands from the same directory as the cert file, the file name alone is enough; otherwise provide the complete path to the file.
keytool -genkey -alias test -keystore mytest
keytool -delete -alias test -keystore mytest
keytool -import -trustcacerts -alias test -file ecscert.cer -keystore test.keystore
keytool -exportcert -alias test -file localhost.pem -rfc -keystore test.keystore
The fourth command above creates a new file, localhost.pem, which is what we need.
f) Copy the above localhost.pem to src/main/resources and replace the existing localhost.pem file.

3. Service Broker security:


The service broker application uses Spring Security, so it uses the default username 'user' and a password as defined in the section below (depending on the Spring profile you choose):

security:
  user:
    password: password
So, with the above config, the broker would be secured using the credentials user/password; a quick way to verify this once the broker is running is shown below.
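As a sanity check, you can call the standard Service Broker catalog endpoint with basic auth once the broker is up. The /v2/catalog path and the X-Broker-API-Version header are part of the Cloud Foundry Service Broker API; the URL below is a placeholder for your broker URL, and the version should match whatever you configure in step 4:

curl -u user:password -H "X-Broker-API-Version: 2.7" https://<ecs-broker-url>/v2/catalog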

4. Service Broker API Version:


Cloud Foundry comes with different Service Broker API versions, and the broker application has to be compatible with the one your platform expects. This broker application uses API version 2.8, but your Cloud Foundry might expect a different version. You can declare a bean that provides a new BrokerApiVersion(), as sketched below. In our case, we simply set the brokerApiVersion field in BrokerConfig.java to 2.7.
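For reference, a minimal sketch of the bean approach is shown below. It assumes the broker is built on the Spring Cloud Cloud Foundry Service Broker library (which provides the BrokerApiVersion class); the version string should match whatever your Cloud Foundry installation expects:

import org.springframework.cloud.servicebroker.model.BrokerApiVersion;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BrokerApiVersionConfig {

    // Advertise Service Broker API version 2.7 instead of the broker's default (2.8)
    @Bean
    public BrokerApiVersion brokerApiVersion() {
        return new BrokerApiVersion("2.7");
    }
}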


5. Push Service Broker app to Cloud Foundry:


The service broker application should be pushed to Cloud Foundry just like any other application. It is sometimes better to run this application locally first to check that it works fine.
a) Build the application using 'gradlew assemble'

b) Run the application using java -jar build/libs/ecs-cf-service-broker-0.0.1-SNAPSHOT.jar to see if it starts without any issues.

c) Push the application to Cloud Foundry. We used a memory limit of 750M for this; an example push command is shown below.
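For example, a push along these lines should work; the application name 'ecs-broker' here is just an illustrative choice:

cf push ecs-broker -p build/libs/ecs-cf-service-broker-0.0.1-SNAPSHOT.jar -m 750M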


6. Register Service Broker with Cloud Foundry:


Once the application is pushed successfully, we need to register the broker so that it appears in the Cloud Foundry marketplace.
Run these commands after logging in to the CF CLI as admin.
a) cf create-service-broker ecs-broker user password https://ecs-broker-url
Note that the user and password above are the broker credentials configured in step 3. The URL is the service broker application URL, which we get after pushing to Cloud Foundry.

b) cf enable-service-access  ecs-namespace

c) cf enable-service-access  ecs-bucket 

d) cf marketplace
   
The fourth command above, 'cf marketplace', should display the services offered by the ecs-broker (ecs-namespace and ecs-bucket).


7. Verify Bucket Creation in ECS:


Log in to ECS and go to the namespace configured in Step 1. We should see a bucket named 'ecs-cf-broker-repository'. This bucket was created as part of the integration and is used to store the metadata related to it.

Conclusion:

Bingo! These steps should help you successfully integrate ECS with Cloud Foundry, and you are now ready to rock and write cool Cloud Native applications using ECS Object Storage!








Friday, July 8, 2016

Microservices based Cloud Native Application - Part III

Preview:

This is the third post in the series of Microservices based application development.

The entire series could be found here:

Microservices based Cloud Native Application - Part I

Microservices based Cloud Native Application - Part II


Microservices based Cloud Native Application - Part III


Overview:


Continuing from the previous posts, in this post I'm going to write about a few challenges I faced while implementing the microservices and how I addressed them. Hopefully this will help other folks who run into similar issues.


Challenges faced while implementing Microservices:


Issue 1:


While using the Zuul API, I was getting the following exception when the AngularJS application invoked the Zuul service.

com.netflix.zuul.exception.ZuulException: Forwarding error

at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.forward(RibbonRoutingFilter.java:132)
at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.handleException(RibbonRoutingFilter.java:157) 
at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.run(RibbonRoutingFilter.java:78)

We also got the following exception:
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server

Root Cause:

The root cause of both of the above exceptions was the same:

The Zuul server was failing to register with the Eureka server as a Eureka client.
After analyzing the logs, we found that there was a mismatch in the API interface signatures while communicating with the Eureka server. It looked like a version mismatch between the Eureka client used by Zuul and the Eureka server!

And it was indeed.

Solution:

In the pom.xml of individual microservices, we were using Eureka clients as:

<dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-eureka</artifactId>
            <version>1.0.6.RELEASE</version>
</dependency>
Clearly, this can lead to confusion and issues if different microservices define different versions of the Eureka client than the one used by the Eureka server.

To fix this and bring consistency to the Eureka versions across all microservices, we removed the version from the individual poms and introduced dependency management in the parent pom (the pom of the parent project for all microservices).

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Brixton.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
As shown above, we used the Brixton release train, and it automatically pulls in the correct versions of the dependencies declared in the child poms.

So the Eureka client dependency in the pom.xml of each individual microservice will look like the one below. Notice there is no version! (A quick way to verify which version actually gets resolved is shown after the snippet.)

<dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-eureka</artifactId>
</dependency>
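To confirm which version the Brixton BOM actually resolves for the Eureka client in a given microservice, the Maven dependency tree can be inspected, filtered on the Eureka group id (the includes filter is part of the maven-dependency-plugin):

mvn dependency:tree -Dincludes=com.netflix.eureka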



Issue 2:

While invoking the Zuul APIs from the AngularJS application, the API calls were failing with a CORS (Cross Origin Resource Sharing) issue as below:
"No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:63342' is therefore not allowed access".


Root Cause:

The reason for the above issue is that the Angular app was running on the domain 'http://pronet-profile-web.cfapps.io/' while the Zuul app was running on the domain 'http://pronet-edge.cfapps.io/'. Notice that the subdomains are different, so the browser treats them as different origins, and this was causing the CORS issue.

Solution:

Add a CORS filter in the Zuul server. Spring generally recommends adding the "@CrossOrigin" annotation on the Spring Boot microservice; however, this solution somehow did not work on the Zuul application (which has the @EnableZuulProxy annotation).

As an alternative fix, we added the filter below to the Zuul application to enable CORS:

@Bean
  public CorsFilter corsFilter() {
      final UrlBasedCorsConfigurationSource urlBasedCorsConfigurationSource= new UrlBasedCorsConfigurationSource();
      final CorsConfiguration corsConfig = new CorsConfiguration();
      corsConfig.setAllowCredentials(true);
      corsConfig.addAllowedOrigin("*");
      corsConfig.addAllowedHeader("*");
      corsConfig.addAllowedMethod("OPTIONS");
      corsConfig.addAllowedMethod("HEAD");
      corsConfig.addAllowedMethod("GET");
      corsConfig.addAllowedMethod("PUT");
      corsConfig.addAllowedMethod("POST");
      corsConfig.addAllowedMethod("DELETE");
      corsConfig.addAllowedMethod("PATCH");
      urlBasedCorsConfigurationSource.registerCorsConfiguration("/**", corsConfig);
      return new CorsFilter(urlBasedCorsConfigurationSource);
  }
Note that you need to add the "OPTIONS" method as well, since the browser sends a preflight OPTIONS request before the actual cross-origin call.


Issue 3:

While invoking the Zuul APIs from the AngularJS application, the API calls were failing with the below error:
The 'Access-Control-Allow-Origin' header contains multiple values 'http://localhost:63342, http://localhost:63342', but only one is allowed.

Root Cause:
On analyzing this, we found that the individual microservices had a "@CrossOrigin" annotation, while we already had a CORS filter on the Zuul server (the fix for Issue 2 above).
Adding another filter at the microservice layer duplicates the CORS headers, and hence we were getting that error.

Solution:

 Removing the @CrossOrigin annotation from the individual microservices solved the issue.

Microservices based Cloud Native Application - Part II

Preview:

This is the second post in the series of Microservices based application development.
The entire series can be found here:

Microservices based Cloud Native Application - Part I

Microservices based Cloud Native Application - Part II

Microservices based Cloud Native Application - Part III

Overview:


Continuing from my previous post, I'm going to explain in detail three concepts which are essential ingredients of a Microservices architecture.
  1. Service Discovery
  2. API Gateway
  3. Circuit Breaker

Service Discovery:


In a Microservices environment, we have multiple services, and when they are deployed in a cloud environment, we have multiple instances of each service.

In such a scenario, we need services to be discoverable. This helps in two ways.

First, when one service invokes another service, it needs to know the actual location where that service is hosted and which instance it should point to.
Second, in a cloud environment, when we add or remove instances, other services need to learn about this transparently.

Using centralized service discovery helps us solve this problem. Spring Cloud Netflix provides a library called 'Eureka' which allows services to register themselves and discover other services.

Following are some code snippets for Eureka client and Eureka Server:

Eureka client: 

Code:
The following annotations need to be placed in all the microservices which register into Eureka (note that these are Spring Boot apps).


@SpringBootApplication
@EnableAutoConfiguration
@EnableDiscoveryClient
public class AppMain extends SpringBootServletInitializer {

    public static void main(String[] args) {
        SpringApplication.run(AppMain.class, args);
    }
}

Configuration:
The microservice registers into Eureka with a specific name (or serviceId). This could be configured in a file bootstrap.yml.


bootstrap.yml:
server:
  port: 8090

spring:
  application:
    name: profile-details

In order to locate the Eureka server, the client needs to know the server details. This could be configured in a file application.yml.


eureka:
  instance:
    hostname: localhost
    leaseRenewalIntervalInSeconds: 10
    metadataMap:
      instanceId: ${vcap.application.instance_id:${spring.application.name}:${server.port:8080}}
  client:
    serviceUrl:
      defaultZone: ${vcap.services.eureka-service.credentials.uri:http://127.0.0.1:8761}/eureka/

Eureka server:

This is a Spring boot app with an annotation @EnableEurekaServer. 

Code:

@SpringBootApplication
@EnableEurekaServer
public class DiscoveryServerApplication {

 public static void main(String[] args) {
  SpringApplication.run(DiscoveryServerApplication.class, args);
 }
}

Configuration:
We can configure this application to be a Eureka server in application.yml

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/



API Gateway:



A microservices based application can have multiple clients (e.g., web, mobile, partner integrations, etc.). Note that each of the individual microservices can evolve on its own and can be deployed with different versions for different clients. In such scenarios, it is necessary to provide a centralized interface which performs the routing and transformation services required.

An API Gateway does exactly this. Spring Cloud Netflix provides a library called Zuul which acts as an API gateway.

To create an API gateway server, use the annotation @EnableZuulProxy on the Spring Boot app. Note that this application should also register itself with Eureka, since it has to locate the other services it forwards requests to.

Code:
@SpringBootApplication
@EnableDiscoveryClient
@EnableZuulProxy
public class EdgeServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(EdgeServerApplication.class, args);
    }
}


Configuration:
The following configs in application.yml will allow it to register with Eureka and add routing logic for request forwarding.

eureka:
  instance:
    hostname: localhost
    leaseRenewalIntervalInSeconds: 10
    metadataMap:
      instanceId: ${vcap.application.instance_id:${spring.application.name}:${server.port:8080}}
  client:
    serviceUrl:
      defaultZone: ${vcap.services.eureka-service.credentials.uri:http://127.0.0.1:8761}/eureka/ 
   
zuul:
  routes:    
    profile-skills: 
      path: /**/skill/**
      stripprefix: false
      serviceId: profile-skills
      
    profile-summary: 
      path: /**/summary/**
      stripprefix: false
      serviceId: profile-details



Note:

  • The angular web application points to the edge server, so all microservice invocations from the angular app go via the edge server.
  • For example, if the angular web app has to invoke the profile-skills service, it would invoke /<edge-server-host-url>/<something>/skill/<something>/.
    • The edge server then applies the rule in the above configuration and forwards the request to the "profile-skills" microservice (it uses the host name map obtained from Eureka to resolve the actual host URL).
  • Setting the "stripprefix" attribute to false in the above configuration makes Zuul retain the prefix part of the URL before the routed path (like the part before /skill or before /summary) when forwarding the request.


Circuit Breaker:



When we have a huge number of microservices (which we will, in a typical complex application), it is necessary for the services to be fault tolerant. Since it is common for services to fail in a cloud environment using commodity hardware, we need to design our services in a fault tolerant way.

Circuit Breaker is a pattern used in Microservices which works pretty much like a circuit breaker in an electrical circuit. When a service keeps failing, the circuit 'opens' and calls are no longer forwarded to it; to keep the flow going temporarily, we define an alternative service implementation (a fallback) which kicks in while the circuit is open. Once the failing service recovers, the circuit closes again.

In our use case, Profile-Details service invokes Profile-Recommendation service. When Profile-Recommendation service fails, an alternative implementation kicks in which will just return a dummy default recommendation, so that the entire flow is not broken. Once the Profile-Recommendation service is back online, normal services will resume.


We use FeignClient (a declarative REST client with client-side load balancing via Ribbon) along with Hystrix.

Code:
In the profile-details service, add these annotations to the Spring Boot app:

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class ProfileDetailsApplication {

    public static void main(String[] args) {
        SpringApplication.run(ProfileDetailsApplication.class, args);
    }
}


Now, to invoke the Profile-Recommendation service, we need not know the host URL of the Profile-Recommendation service. We just need to know the service name (configured in the bootstrap.yml of the Profile-Recommendation service). Since we use Eureka, the service name is resolved to an actual host URL.


Create an interface as below to invoke an API in Profile-Recommendation service.

@FeignClient(name = "profile-recommendation", fallback = RecommendationClientFallback.class)
public interface RecommendationFeignClient {

    @RequestMapping(method = RequestMethod.GET, value = "/api/profile/{userId}/recommendation")
    List<ProfileRecommendation> getRecommendations(@PathVariable("userId") String userId);
}

The value of the name attribute is the service name which we are calling. The value of the fallback attribute is an alternative implementation, in case the actual call fails.
Invoking one microservice from another is this simple!
Following is the fallback implementation:



@Component
public class RecommendationClientFallback implements RecommendationFeignClient {

    @Override
    public List<ProfileRecommendation> getRecommendations(String userId) {
        // Return a canned recommendation so that the calling flow is not broken
        List<ProfileRecommendation> recommendList = new ArrayList<>();
        ProfileRecommendation recommend = new ProfileRecommendation();
        recommend.setRecommendationText("This is a default recommendation!");
        recommendList.add(recommend);
        return recommendList;
    }
}

That's it! When the profile-recommendation service fails, the fallback is invoked and returns "This is a default recommendation!".

Thursday, June 23, 2016

Microservices based Cloud Native Application - Part I

Preview:

In this post, I'm going to write about an application based on Microservices Architecture, which I presented in the Cloud Foundry Meetup. 

The entire series can be found here:
Microservices based Cloud Native Application - Part I

Microservices based Cloud Native Application - Part II

Microservices based Cloud Native Application - Part III


Application Use Case:

The application is a miniature LinkedIn kind of application, which was built to demonstrate the key aspects of a Cloud Native application.


Application Features:


The app allows users to build their profile with experience details, skills, and certifications.

It also allows users to connect with other users, endorse each other's skills, and recommend other users.

Application Architecture:





The application uses a Microservices architecture, which consists of loosely coupled, independently evolvable and deployable services.
It uses polyglot persistence, i.e., each service uses the database that best fits its use case. Both NoSQL and SQL flavors are used in this application.

In addition to the microservices which implement functional features, the following Cloud Native application features are also implemented. 
  1. Loosely coupled bounded context
  2. Service Discovery 
  3. API Gateway
  4. Centralized Logging
  5. Fault Tolerance
These are explained in more detail in the following sections.

Microservices:

  1. Profile-Basic: This service allows CRUD of a user profile (name, email, etc.). Since the data is well defined, we used an RDBMS (MySQL) for this.
  2. Profile-Skill: This service allows the user to add new skills to his/her profile and allows other users to endorse those skills. Due to the key/value nature of the data, we used a key/value store (Redis) for this.
  3. Profile-Details: This service allows CRUD of the experience details/certification details of a user. Since the data is largely free text not restricted by a schema, we used a document store (MongoDB) for this.
  4. Profile-Recommendation: This service allows users to recommend other users. Since the data is largely free text not restricted by a schema, we used a document store (MongoDB) for this as well. It is implemented as a separate service to demonstrate a key feature (circuit breaking). From a use case perspective, creating a separate service would also allow other services to look at the recommendations of a user to rate the profile (a future use case).
  5. Profile-Connection: This service allows users to add other users to their network and also provides a listing of all the users present in their network. In order to maintain a networked graph of all the users, we used a graph database (Neo4j) for this service.
  6. Discovery-Server: This service acts as a discovery service and allows all other microservices to register with it and discover each other (including multiple instances of each service). It is implemented using the Netflix Eureka library.
  7. Edge-Server: This service acts as an API gateway and can be used to provide authentication, routing and transformation of requests. It is implemented using the Netflix Zuul library.
  8. Profile-web: This is an Angular application, which consists of the actual client-side implementation of the application.
Continued...See Part II

Thursday, May 12, 2016

Cloud Foundry Meetup - Developing Cloud Native Apps

Developing Cloud Native Apps

This post provides a brief overview of what to expect from the Cloud Foundry meetup session, 'Developing Cloud Native Apps on Cloud Foundry', which is scheduled for June 7th in Bangalore.

Presenters:



Rajagopalan Sundararajan, Senior Solutions Architect, EMC
Raghuveer Bhandarkar, Solutions Architect, EMC


Preview:



With the rapid adoption of cloud computing by organizations, it has become increasingly imperative to make our application architectures cloud enabled. This talk, which is a part of the Cloud Foundry Meetup, covers cloud native software applications, their characteristics and challenges, supported by a demonstration through an app.

Abstract:

Cloud native software and applications are a buzzword today, and enterprises are trying to understand this concept and apply it in building software systems which are "Cloud Native". This talk captures the experiences of developing such cloud native systems for EMC's customers. We will explore the key characteristics of cloud native software systems, the key enablers required to develop them, and the key challenges they bring along.
It is followed by a demo of how Cloud Foundry supports building Cloud Native software. We will touch very briefly on 12-factor app principles and Microservices using Spring Boot and Spring Cloud, among other topics.


Meetup details:



REGISTER TODAY FOR THE CLOUD FOUNDRY MEGA MEETUP! Bangalore bit.ly/1Np3acD Pune bit.ly/1TP6hK1


EMC and KPIT, in association with Cloud Foundry, present Cloud Foundry Mega Meetup 2016. Be a part of the largest Cloud Foundry event in India – join hundreds of developers, architects and business leaders at Cloud Foundry Mega Meetup to help your company deliver apps faster than ever before. Attend the event to catch industry experts present on topics that are increasingly relevant in today’s scenario and define your organization’s multi-cloud strategies.

Thursday, April 28, 2016

#GIDS16..The Great Indian Developer Summit, Bengaluru

#GIDS16

The Great Indian Developer Summit 2016 is currently under way in Bangalore, India. In this article, I'm briefly touching upon the various topics discussed and my first-hand experience of the event.

There were talks on current industry trends and emerging technologies. It is no longer sufficient to have knowledge of one stack alone; one also needs knowledge of operations (read DevOps), cloud, and working with huge data sets (read Big Data).

On a lighter note, the classic "Blue Pill or Red Pill" dilemma from 'The Matrix' made its appearance in a couple of presentations.

A few talks are detailed out below:


1. Microservice, Microservice, Microservice:


There were various talks on Microservices, ranging from how to go about building them to why we need them. One of the interesting talks explained that Microservices are not a silver bullet to be applied to every project, but should be chosen wisely in order to be agile.

There was also a demo of a Microservices implementation using Kubernetes and Docker on the OpenShift platform. Various Netflix OSS components like Hystrix were used for circuit breaking and load balancing.


2. Vert.x 3.0:


An amazing ecosystem on the JVM which supports building reactive, asynchronous applications using languages like Scala, Groovy, JavaScript, etc. Scott Davis demonstrated a hands-on example of how we can build an HTTP server quickly and deploy it on multiple server instances, all of which communicate with each other using the UDP protocol and act as a cluster.
One statement which summed it all up was "Vert.X is the Docker for JVM".
Man, believe me, amazingly cool stuff!

Vert.x is going to be the next big thing!!!.


3. Are we computer scientists? Nah...we are story tellers:


Scott gave an amazing presentation on how we engineers are more like storytellers, not just source-code-producing machines, but rather people who build applications. He also stressed the need to write unit tests to validate what we code, drawing an analogy to scientists who validate their theories. Spot on!
An interesting point was made about how a user story in Agile (the good old "As a ... I want ... so that ...") does not describe what is needed most of the time. He made a case for using a hypothesis instead of a user story.


4. Java is NOT DEAD! It's Maturing!!!


Bob McWhirter's presentation was received with thunderous applause and cheers from the audience when he went on to say that Java is not dead, but is getting wiser and more mature with age. The enhancements in Java 8, Java EE 7, Vert.x and Groovy/Scala hold much promise for Java, and he backed it up with statistics indicating the trend that more and more applications are being written in Java. Even if an application is not written in Java as a language itself, the JVM will be in the picture somewhere.

Long live Java!!!


5. Groovy:


Paul King introduced Groovy with all its features, including Domain Specific Languages (DSLs), closures, and the functional programming aspects of Groovy. He also explained how Groovy eliminates the pain points of Java while piggybacking on Java's strengths.



6. Lean Engineering:


Bill Scott shared his experience of how bringing the product engineering development team and the designers (and sometimes customers) together in a room, facilitating collaboration and experimentation, led to building world-class product features at PayPal.


7. Mobile backend as a Service (MBAAS):


Mike Keith explained the various architectural components required to build a mobile-serving backend system. This included how the ecosystem would interact with legacy enterprise applications, also touching upon security aspects like OAuth/SAML. Various API management tools like Apigee were introduced. The challenges of identity management in different business scenarios like 'Business to Customer' (B2C) or 'Business to Employee' (B2E) were also presented.


8. Angular 2, Java Lambda expressions:



There were talks on how Angular 2 is radically different from Angular 1, with introductions to WebSocket programming. Java lambda expressions and Streams were presented, along with well-drafted lessons learned from using these technologies.

Overall, it was very exciting and the next couple of years appear to be the most turbulent years in technology, with a huge influx of stacks.

After all, Albert Einstein could not be more apt when he said "Once you stop learning, you start dying".

Monday, January 18, 2016

Using Maven Dependency Tree in troubleshooting

In this blog post, I'm going to explain how to check the Maven dependency tree and how it is useful in troubleshooting certain runtime exceptions which we might encounter while running Spring applications.


Maven Dependency resolution:


When you use Maven, it uses its own dependency resolution mechanism to decide which jar to use when there is a conflict.
Let's say you are using two dependencies, each of which pulls in the same jar, but in different versions. Unless you pay attention, Maven might end up using an unexpected version of that jar, and you will start getting exceptions like 'ClassNotFoundException' or 'NoSuchMethodError'.

So, it helps to know which version of a jar Maven has resolved and added to our application. This is where the Maven dependency tree helps us.


Maven Dependency Tree:


A simple command, "mvn dependency:tree -Dverbose", will print out the entire Maven dependency tree.

I will show with an example how to analyze this tree. Since it is based on an issue I recently faced, I hope it will help in understanding it better.


Practical Example:


Let's say we have a Maven project for a very basic Java Spring application, for illustration purposes.

Let's use PowerMock (with its Mockito API) and Mockito core for the unit test cases; the dependencies are:

<dependency>
      <groupId>org.powermock</groupId>
      <artifactId>powermock-module-junit4</artifactId>
      <version>1.5.6</version>
      <scope>test</scope>
 </dependency>
   
 <dependency>
      <groupId>org.powermock</groupId>
      <artifactId>powermock-api-mockito</artifactId>
      <version>1.5.6</version>
      <scope>test</scope>

 </dependency>

Also add a dependency on Mockito core:
<dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-core</artifactId>
      <version>1.9.5</version>
      <scope>test</scope>
</dependency>

Add a dependency for the Springockito annotations:
<dependency>
<groupId>org.kubek2k</groupId>
<artifactId>springockito-annotations</artifactId>
<version>${springockito-annotations-version}</version>
<scope>test</scope>

</dependency>

Create a JUnit test case as below:

@RunWith(PowerMockRunner.class)
public class MyLogicTest {

    @Mock
    private MyService service;

    @Test
    public void testMyMethod() {

    }
}

Run the test case. You would get an error:

java.lang.NoSuchMethodError: org.mockito.internal.creation.MockSettingsImpl.setMockName(Lorg/mockito/mock/MockName;)Lorg/mockito/internal/creation/settings/CreationSettings;

You might have encountered similar NoSuchMethodError or ClassNotFoundException exceptions while running Spring applications. Most probably, those errors occurred because the same jar (in different versions) was present in different dependencies. This can cause Maven to pick an unexpected version of the jar, thereby causing runtime exceptions.

What is the actual issue here?


Mockito 1.9.5 is declared in the pom, but somehow an older version is being used and causing the problem. We need to find out which dependency is the culprit causing this issue.

How to resolve this issue?



Run the command below, which displays the tree structure of the dependencies Maven has resolved.

mvn dependency:tree -Dverbose 

It shows a tree structure with parent and child nodes.
Search for "org.powermock" and you will notice the following:

[INFO] +- org.powermock:powermock-api-mockito:jar:1.5.6:test
[INFO] |  +- (org.mockito:mockito-all:jar:1.9.5:test - omitted for conflict with 1.9.0)

It clearly says that 1.9.0 is being used instead of 1.9.5.
Now, we need to find out which dependency is pulling in 1.9.0, so search for 1.9.0 in the tree output. You will find:

[INFO] +- org.kubek2k:springockito-annotations:jar:1.0.9:test
[INFO] |  \- org.mockito:mockito-all:jar:1.9.0:test

So it is clear that "org.kubek2k:springockito-annotations" is the culprit: it is pulling in the older version of Mockito (mockito-all 1.9.0). To fix this issue, we need to tell Maven to exclude the "mockito-all" jar from "org.kubek2k:springockito-annotations". (A quick way to double-check where each Mockito artifact comes from is shown below.)
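As a quick sanity check before touching the pom, the tree can also be filtered to show only the Mockito artifacts and where they come from (the includes filter is part of the maven-dependency-plugin):

mvn dependency:tree -Dverbose -Dincludes=org.mockito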

How to instruct Maven to use the right jar version?


Use exclusions!!
Go to the pom and add an exclusions element as below:

<dependency>
<groupId>org.kubek2k</groupId>
<artifactId>springockito-annotations</artifactId>
<version>${springockito-annotations-version}</version>
<scope>test</scope>
<exclusions>
<exclusion>
 <groupId>org.mockito</groupId> 
 <artifactId>mockito-all</artifactId>
</exclusion>
</exclusions>
</dependency>

That's it! This should resolve the runtime issue and the test case should start working. I hope this post was helpful in understanding the significance of the Maven dependency tree.