Tuesday, 17 February 2015

Simple RabbitMQ Sender and Listener

Assuming a RabbitMQ server is running on localhost and requires a user id and password to connect.
This example has

  • a Sender, which sends the message (line) typed on the command line to a specific exchange with a routing key
  • a Receiver, which listens for messages sent to a specific queue

The working maven project for this example can be downloaded here.

SENDER


import java.io.IOException;
import java.util.Scanner;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class RabbitSender {

 private static final String TEST_EXCHANGE = "test-exchange";
 private static final String TEST_QUEUE = "test-queue";

 public static void main(String[] args) throws Exception {
  Connection connection = null;
  Channel channel = null;
  try (Scanner sc = new Scanner(System.in)) {
   ConnectionFactory factory = new ConnectionFactory();
   factory.setUri("amqp://localhost");
   factory.setUsername("userid");
   factory.setPassword("password");

   System.out.println("-- Creating Connection--");
   connection = factory.newConnection();

   System.out.println("-- Creating channel--");
   channel = connection.createChannel();

   System.out.println("-- Creating Exchange--");
   channel.exchangeDeclare(TEST_EXCHANGE, "topic");

   System.out.println("-- Creating queue--");
   channel.queueDeclare(TEST_QUEUE, false, false, false, null);
   channel.queueBind(TEST_QUEUE, TEST_EXCHANGE, "test.#");

   // The connection, channel, exchange and queue are set up once;
   // only the publish happens inside the loop.
   while (true) {
    System.out.print("--Type in the message to send--");
    String s = sc.nextLine();

    System.out.println("-- Sending Message--");
    channel.basicPublish(TEST_EXCHANGE, "test.route", null,
      s.getBytes());
    System.out.println("-- Message sent to--" + TEST_EXCHANGE
      + " with routing key: test.route");
   }
  } catch (Exception e) {
   e.printStackTrace();
  } finally {
   if (channel != null)
    channel.close();
   if (connection != null)
    connection.close();
  }
 }
 }
}
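The binding key test.# above uses the topic-exchange wildcards: * matches exactly one dot-separated word, and # matches zero or more words, so the routing key test.route matches the binding test.#. As a rough illustration of these matching rules only (this is a sketch of the semantics, not RabbitMQ code; the class and method names are invented):

```java
public class TopicMatch {

    // "#" matches zero or more words, "*" matches exactly one word
    public static boolean matches(String pattern, String key) {
        return matches(pattern.split("\\."), 0, key.split("\\."), 0);
    }

    private static boolean matches(String[] p, int pi, String[] k, int ki) {
        if (pi == p.length) {
            return ki == k.length; // pattern consumed: match only if key is too
        }
        if (p[pi].equals("#")) {
            // '#' can absorb zero or more words of the key
            for (int j = ki; j <= k.length; j++) {
                if (matches(p, pi + 1, k, j)) {
                    return true;
                }
            }
            return false;
        }
        if (ki == k.length) {
            return false; // key exhausted but pattern still expects a word
        }
        if (p[pi].equals("*") || p[pi].equals(k[ki])) {
            return matches(p, pi + 1, k, ki + 1);
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(matches("test.#", "test.route")); // true
        System.out.println(matches("test.#", "test"));       // true: '#' matches zero words
        System.out.println(matches("test.*", "test.a.b"));   // false: '*' is exactly one word
    }
}
```

This is why the Receiver below sees everything the Sender publishes with routing key test.route.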

RECEIVER
import java.util.Scanner;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.QueueingConsumer;
import com.rabbitmq.client.QueueingConsumer.Delivery;

public class RabbitListener {

 private static final String TEST_EXCHANGE = "test-exchange";
 private static final String TEST_QUEUE = "test-queue";

 public static void main(String[] args) throws Exception {
  Connection connection = null;
  Channel channel = null;
  try {
   ConnectionFactory factory = new ConnectionFactory();
   factory.setUri("amqp://localhost");
   factory.setUsername("userid");
   factory.setPassword("password");

   System.out.println("-- Creating Connection--");
   connection = factory.newConnection();

   System.out.println("-- Creating channel--");
   channel = connection.createChannel();

   // QueueingConsumer pulls deliveries into an internal blocking queue
   QueueingConsumer consumer = new QueueingConsumer(channel);

   // second argument enables auto-ack
   channel.basicConsume(TEST_QUEUE, true, consumer);

   System.out.println("-- Waiting for message--");
   Delivery delivery = null;
   // nextDelivery() blocks until the next message arrives
   while ((delivery = consumer.nextDelivery()) != null) {
    System.out.println("--Message received --"
      + new String(delivery.getBody()));
   }

  } catch (Exception e) {
   e.printStackTrace();
  } finally {
   if (channel != null)
    channel.close();
   if (connection != null)
    connection.close();
  }
 }
}

Monday, 16 February 2015

ACID vs CAP

ACID - Atomicity, Consistency, Isolation, Durability: the set of properties that define the quality of database transactions.

CAP - Consistency, Availability, Partition tolerance

These are three basic properties or requirements for a distributed system. Assume a distributed database system, where data is distributed/partitioned across nodes/servers.

According to the CAP theorem, all three properties cannot be fully guaranteed at the same time, so a distributed system settles for a combination of two of them (for example, availability and partition tolerance at the cost of strict consistency).





Saturday, 14 February 2015

Setup ELK on Linux (Elasticsearch 1.4.2 /Logstash 1.4.2/Kibana 3.1.2)

Below are instructions to set up the ELK stack, in 8 simple steps.

1. Install JDK and httpd
2. Download and extract the necessary components
3. Configure, start and verify the httpd and elasticsearch servers
4. Set up Kibana on the httpd path
5. Test Kibana and get it working with a few changes to elasticsearch
6. Add the logstash configuration
7. Run logstash to push data to Elasticsearch
8. Advanced logstash configurations to parse the access_log




Install JDK and Httpd

Make sure the appropriate yum repos are updated.

yum install java-1.7.0-openjdk
yum install httpd

Disable Firewall 
service iptables stop


Downloads:




Copy the files to the /root folder of a Linux machine

ElasticSearch: unzip elasticsearch-1.4.2.zip
Kibana: tar -zxvf kibana-3.1.2.tar
Logstash: tar -zxvf logstash-1.4.2.tar
Head Plugin: elasticsearch-1.4.2/bin/plugin --url file:///root/elasticsearch-head-master.zip --install mobz/elasticsearch-head

Configure Elasticsearch

vi /root/elasticsearch-1.4.2/config/elasticsearch.yml
Uncomment cluster.name and give it a name; don't use the default.

################################### Cluster ###################################

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#

cluster.name: vidhya-elk



Start Servers

service httpd restart


Verify the server Installation

Httpd: http://<IP/hostname>




Start Elasticsearch
/root/elasticsearch-1.4.2/bin/elasticsearch



Verify Elasticsearch :   http://<ip/hostname>:9200/

Verify Elasticsearch head :  http://<ip/hostname>:9200/_plugin/head



Kibana Setup


mkdir /var/www/kibana3
cp -r /root/kibana-3.1.2/*   /var/www/kibana3/

vi /etc/httpd/conf/httpd.conf

alias /kibana /var/www/kibana3
<Directory /var/www/kibana3>
  AllowOverride All
  Require all granted
</Directory>

Verify Kibana : 





If Kibana cannot connect to Elasticsearch at this point, a change is required in elasticsearch.yml: add the line below at the end of the file.

vi /root/elasticsearch-1.4.2/config/elasticsearch.yml

http.cors.enabled: true

Restart elasticsearch





Logstash Setup

Create a configuration file:
vi  /root/logstash-1.4.2/conf/es.conf
input { stdin { }}
output {
        stdout { }
        elasticsearch {
                bind_host => "127.0.0.1"
                protocol => http
        }
}

The above configuration takes anything typed on standard input and publishes it to elasticsearch, as well as printing it on the command line.

Verify Logstash

/root/logstash-1.4.2/bin/logstash agent -f /root/logstash-1.4.2/conf/es.conf --configtest
-- This verifies the configuration file
./logstash-1.4.2/bin/logstash agent -f logstash-1.4.2/conf/es.conf
-- This pushes whatever is typed on the command line to elasticsearch; you can see indexes getting created using the elasticsearch head plugin.

Advanced Logstash configuration


1. Parse the access_log and publish to elasticsearch for log analysis 

vi  /root/logstash-1.4.2/conf/access_log.conf
input {
  file {
    path => "/var/log/httpd/access_log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    type => "apache-access"
  }
}

output {
        stdout { }
        elasticsearch {
              bind_host => "127.0.0.1"
              protocol => http
        }
}
2. Parse the access_log and publish to elasticsearch for log analysis, custom grok filters 

vi  /root/logstash-1.4.2/conf/access_grok_log.conf

input {
  file {
    path => "/root/log/access_log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    type => "apache-access"
  }
}
filter {
  if ([message] =~ "^::") {
      drop {}
  }
  grok {
    match => [ "message", "%{COMBINEDAPACHELOG}" ]
  }
  date {
    match => [ "timestamp" ,"dd/MMM/yyyy:HH:mm:ss Z"]
  }
}

output {
        stdout { }
        elasticsearch {
                bind_host => "127.0.0.1"
                protocol => http
        }
}








Friday, 30 January 2015

AOP introduction

Aspects are generic features that cut across classes, methods, modules etc. An aspect intercepts the execution of the methods it is configured to match and executes according to strategies defined separately. The actual classes/methods are not aware of the logic being applied to them.

The cross-cutting concerns may be exception handling, activity logging, transaction management, etc. These can be defined as aspects, which work based on configuration and can be applied to any code matching their criteria.

Where are Aspects used:

  • In legacy applications that need behaviour added across classes which cannot be touched. Say tracing was missing in a legacy application; to plug tracing in, every class/method would have to be modified to log entry and exit. Instead, a trace aspect can do this by trapping all (or certain) method invocations: log before, execute the method using reflection, then log after.
  • For generic logic that needs to be handled in a detached class, keeping the intercepted classes unaware of it.


How do Aspects work?

The aspects and their pointcuts are preloaded. When a method is invoked, the call goes to a proxy, which hands off the method execution to the aspects applicable to it based on the pointcuts. Aspect chaining occurs, and the actual method is executed at the appropriate time based on the aspect ordering.



Advice chaining:  As it happens with spring 2.0


AOP Concepts in Spring:

In Spring AOP, aspects are implemented using regular classes (the schema-based approach) or regular classes annotated with the @Aspect annotation (@AspectJ style).

  • Join point: A point during the execution of a program, such as the execution of a method or the handling of an exception. In Spring AOP, a join point always represents a method execution.
  • Advice: Action taken by an aspect at a particular join point. Different types of advice include "around," "before" and "after" advice.
  • Pointcut: A predicate that matches join points. Advice is associated with a pointcut expression and runs at any join point matched by the pointcut (for example, the execution of a method with a certain name).
  • Introduction: (Also known as an inter-type declaration). Declaring additional methods or fields on behalf of a type. Spring AOP allows you to introduce new interfaces (and a corresponding implementation) to any proxied object. For example, you could use an introduction to make a bean implement an IsModified interface, to simplify caching.
  • Target object: Object being advised by one or more aspects. Also referred to as the advised object. Since Spring AOP is implemented using runtime proxies, this object will always be a proxied object.
  • AOP proxy: An object created by the AOP framework in order to implement the aspect contracts (advise method executions and so on). In the Spring Framework, an AOP proxy will be a JDK dynamic proxy or a CGLIB proxy. Proxy creation is transparent to users of the schema-based and @AspectJ styles of aspect declaration introduced in Spring 2.0.
  • Weaving: Linking aspects with other application types or objects to create an advised object. This can be done at compile time (using the AspectJ compiler, for example), load time, or at runtime. Spring AOP, like other pure Java AOP frameworks, performs weaving at runtime.
Types of Advice

  • @Before : executes before a join point
  • @After (finally) : executes after a join point, regardless of success or exception
  • @AfterReturning : executes after a join point on successful method execution
  • @AfterThrowing : executes after a join point on exception
  • @Around : surrounds a join point; the aspect has complete control before, during and after the method invocation
Proxy:

Spring uses JDK dynamic proxies for AOP proxies. CGLIB is used when the target object does not implement an interface.
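To make the proxy mechanism concrete, here is a minimal sketch using only a JDK dynamic proxy (plain Java, no Spring; the Greeter names are invented for illustration). The invocation handler plays the role of an around advice: it runs logic before and after delegating to the real object, which stays unaware of the interception.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {

    public interface Greeter {
        String greet(String name);
    }

    public static class GreeterImpl implements Greeter {
        public String greet(String name) {
            return "Hello " + name;
        }
    }

    // Wraps the target in a proxy; the handler is a hand-rolled "around advice"
    public static Greeter advised(Greeter target) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("-- before " + method.getName() + " --");
            Object result = method.invoke(target, args); // proceed to the real object
            System.out.println("-- after " + method.getName() + " --");
            return result;
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                handler);
    }

    public static void main(String[] args) {
        Greeter g = advised(new GreeterImpl());
        // prints the before/after lines from the handler, then "Hello world"
        System.out.println(g.greet("world"));
    }
}
```

Spring's JDK-proxy mode works along these lines; CGLIB instead subclasses the target at runtime, which is why it is needed when there is no interface to proxy.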

How to allow Spring to load the @Aspect annotation

<aop:aspectj-autoproxy/>
 
 

Pointcut Expression

This expression defines the matching pattern to which the advice should be applied.

The basic syntax of the execution designator is:

execution(modifiers-pattern? return-type-pattern declaring-type-pattern? method-name-pattern(param-pattern) throws-pattern?)

For example, execution(* org.first.*.*(..)) matches any method, with any return type and any parameters, in any class of the org.first package.

Simple example of defining an aspect in Spring.


Aspect



 
package org.first.aspects;

import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class FirstAspect
{

    @Before("execution(* org.first.*.*(..))")
    public void before()
    {
        System.out.println("---Logging before --");
    }

    @After("execution(* org.first.*.*(..))")
    public void after()
    {
        System.out.println("---Logging After--");
    }
}
 
 
 

Bean Class

The test code below looks the bean up through the FirstService interface, so the implementation class implements it:

package org.first;

public interface FirstService
{
    String printMessage();

    String messagePrint();
}

package org.first;

public class FirstServiceImpl implements FirstService
{

    public FirstServiceImpl()
    {
    }

    public String printMessage()
    {
        final String s = "--printMessage()---";
        System.out.println(s);
        return s;

    }

    public String messagePrint()
    {
        final String s = "--messagePrint()---";
        System.out.println(s);
        return s;

    }

}


beans.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:aop="http://www.springframework.org/schema/aop" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans 
  http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
  http://www.springframework.org/schema/aop
  http://www.springframework.org/schema/aop/spring-aop-3.0.xsd"> 
 
    <!--Aspect definitions are defined in the class --> 
    <aop:aspectj-autoproxy /> 
 
    <!-- Without this the aspects are not instantiated -->  
    <bean id="aspect" class="org.first.aspects.FirstAspect" />

    <!-- The bean being advised by the aspect -->
    <bean id="firstService" class="org.first.FirstServiceImpl" />
</beans>

Dependencies


 
<dependency>
 <groupId>org.springframework</groupId>
 <artifactId>spring-context</artifactId>
 <version>${spring-framework.version}</version>
</dependency>
<dependency>
 <groupId>org.springframework</groupId>
 <artifactId>spring-aop</artifactId>
 <version>${spring-framework.version}</version>
</dependency>

<!--  @Aspect Annotation -->
<dependency>
 <groupId>org.springframework</groupId>
 <artifactId>spring-aspects</artifactId>
 <version>${spring-framework.version}</version>
</dependency> 


Test code

 
import org.first.FirstService;
import org.junit.Test;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class AOPTest
{

    ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");

    @Test
    public void testContext()
    {
        final FirstService service = (FirstService) ctx.getBean("firstService");
        service.printMessage();

    }
}


Output

---Logging before --
--printMessage()---
---Logging After--

References :   http://docs.spring.io/spring/docs/2.0.x/reference/aop.html
 
 

Hibernate : How to get more details on "Errors in Named Queries"


Error seen when there is a problem in a named query:

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'Dao': Injection of resource dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'DaoImpl': Injection of resource dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sessionFactory' defined in ServletContext resource [/WEB-INF/spring/sessionfactory-cfg.xml]: Invocation of init method failed; nested exception is org.hibernate.HibernateException: Errors in named queries: select_records_with_auth


This just gives you a high-level error.

To get more details, change the log level. (The example below is for log4j.)

<logger name='org.hibernate' level='DEBUG' additivity="false">
        <appender-ref ref='DebugLogFile'/>
</logger>

You will get in-depth details, as shown below:

2015-01-30 14:46:29.864 UTC,ERROR,***-web,,org.hibernate.hql.PARSER,null,RMI TCP Connection(3)-127.0.0.1,line 2:10: expecting IDENT, found '*'
2015-01-30 14:46:29.874 UTC,DEBUG,index-web,,org.hibernate.hql.ast.ErrorCounter,null,RMI TCP Connection(3)-127.0.0.1,line 2:10: expecting IDENT, found '*'
antlr.MismatchedTokenException: expecting IDENT, found '*'
        at antlr.Parser.match(Parser.java:211) ~[antlr.jar:na]
        at org.hibernate.hql.antlr.HqlBaseParser.identifier(HqlBaseParser.java:1612) [hibernate-core.jar:3.6.10.Final]
        at org.hibernate.hql.antlr.HqlBaseParser.atom(HqlBaseParser.java:3691) [hibernate-core.jar:3.6.10.Final]


Saturday, 21 June 2014

[Solution] .bat file doesn't proceed after mvn clean install

Executing a .bat file does not proceed past mvn clean install:

mvn -o clean install
echo %classpath%  -- This never executes

Solution:

call mvn -o clean install
echo %classpath%

Reason: a .bat file halts after invoking another batch script directly (mvn itself is a batch script). The CALL command invokes the other script from the current one and then returns control, so execution proceeds further.

Reference: http://www.computerhope.com/call.htm