Monday, December 19, 2016

Using logging in your unit test

When running unit tests, you are not supposed to check the logs generated by the test framework. One of the first principles of the test-driven development paradigm is that you should be able to run your tests automatically, without any developer interaction. You should not be forced to check any output besides the test result, and you should never have to read the log to decide whether a test run was successful.

So much for the theory.

In fact, it is sometimes very helpful if you can analyse logging from your unit tests. As you are working test driven, you also use the unit test runner for developing the code. During the development phase, being able to look at the logs gives you a closer view of the process you are implementing, and can save a lot of time and debugging.

The problem is that you do not want to change the log level in your code just to let the JUnit process emit your debug or trace level logs. As JUnit runs with the default INFO level (depending on the actual logging implementation you are using), you need a quick way to change the logging level.

Change the logging level in Eclipse

You can define a custom log configuration file for your JUnit runner:

-Djava.util.logging.config.file=src/test/resources/logging.properties

To use a custom log settings file, open the actual run configuration and add the path of the file as a VM argument.

Although this method is handy when you have a central run configuration that you run several times during development, in most cases it is inconvenient: you need to define the same runtime argument for every run or debug configuration, which slows down development.

Set log level programmatically    

Thanks to the APIs of the logging frameworks, it is possible to set the log level dynamically in your code. The @Before method is just the right place for it. Unfortunately you need to remember slightly different API calls for different logging frameworks, but if you stick to one, it is not a huge problem.

Here you can see implementations for Log4j and SLF4J, the two I use most.



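The original snippets were embedded as images; below is a minimal sketch of what such @Before methods could look like, assuming Log4j 1.x on the one hand and SLF4J backed by Logback on the other.


// Log4j 1.x: raise the root logger to DEBUG before each test
@Before
public void setUp() {
    org.apache.log4j.Logger.getRootLogger().setLevel(org.apache.log4j.Level.DEBUG);
}

// SLF4J itself offers no API for changing levels; with Logback as the backend
// you can cast the root logger to the Logback implementation and set the level there
@Before
public void setUp() {
    ch.qos.logback.classic.Logger root = (ch.qos.logback.classic.Logger)
            org.slf4j.LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
    root.setLevel(ch.qos.logback.classic.Level.DEBUG);
}
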
As you can see, once you have a reference to the root logger, you are free to change any settings the given API allows.

Please consider that logging on such a wide scale is usually unnecessary after the development phase. I use this possibility only when creating new features, and remove it or comment it out once the code works as desired.



Custom toString builder for Eclipse




If you are not satisfied with the way Eclipse generates the toString method for your classes, and you are not lucky enough to use the Lombok project, you can set a custom toString builder in Eclipse.

In the Generate toString dialog you can configure the code style.





By clicking on the Configure button, you can open the Configure Custom toString Builder dialog.




Here you can define the class used for generating the toString method. In the above example I use the ToStringBuilder from Apache Commons (org.apache.commons.lang.builder.ToStringBuilder), but you can implement your own builder class that meets the requirements.

The generated method looks like this:

    @Override
    public String toString() {
        ToStringBuilder builder = new ToStringBuilder(this);
        builder.append("period", period).append("orderPosition", orderPosition).append("selectable", selectable).append("name_e", name_e);
        return builder.toString();
    }


As I wanted to generate toString in XML and JSON format, I have implemented my own toString builder classes.

My builders have the advantage that they do not use reflection to generate the String representation of the classes. The disadvantage is that they only work with relatively simple classes. But as I try to keep my classes clean and simple, this is not a real problem for my projects.

It is relatively easy to implement a class which satisfies the requirements of a custom Eclipse toString() builder. Here is an example of how I created a JSON-like toString() builder class:


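The original class was embedded as an image; the following is only a minimal sketch of how such a builder could look (the class name JsonToStringBuilder and the exact output format are my assumptions):

public class JsonToStringBuilder {

    private final StringBuilder buffer = new StringBuilder();
    private boolean first = true;

    public JsonToStringBuilder(final Object object) {
        // start the JSON-like representation with the simple class name
        buffer.append("{\"").append(object.getClass().getSimpleName()).append("\": {");
    }

    // returning 'this' allows Eclipse to generate chained (fluent) append() calls
    public JsonToStringBuilder append(final String fieldName, final Object value) {
        if (!first) {
            buffer.append(", ");
        }
        first = false;
        buffer.append('"').append(fieldName).append("\": \"").append(value).append('"');
        return this;
    }

    @Override
    public String toString() {
        return buffer.toString() + "}}";
    }
}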

Please note that, in order for Eclipse to use the fluent API when calling the append() method, I needed to implement the builder interface accordingly: the append() method has to return the builder instance itself.

The configuration of the class to be used as a builder looks like this:




The generated method looks like this:

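The generated code was shown as a screenshot in the original post; with the sketched JsonToStringBuilder above and the fields from the earlier example, it would look roughly like this:

    @Override
    public String toString() {
        JsonToStringBuilder builder = new JsonToStringBuilder(this);
        builder.append("period", period).append("orderPosition", orderPosition).append("selectable", selectable).append("name_e", name_e);
        return builder.toString();
    }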








Tuesday, December 6, 2016

Use standard Java APIs instead of reimplementing them


While we are joyfully solving our tasks, we tend to forget how much help we can get from the standard Java API. Before implementing something very exciting, elegant and nice, it is always a useful tactic to stop, take a deep breath, and check whether the function is already available somewhere.

While doing code reviews for my colleagues, I keep finding methods and functions that have long been part of Java or of some commonly used utility library. It is embarrassing, because by using the existing solution you could have spared the time spent recreating, testing and maintaining your custom solution.

I believe that knowing the basic Java functionality tells a lot about a programmer and the way he codes in his daily work.

Advantages of using standard Java API:

  • hundred percent tested
  • stable, long-term available code
  • in most cases, the existing solution is a far more elegant and effective implementation than the one you would come up with

So my advice is: try to find the function you need first in the default Java API, then in a commonly used library. Implement your own solution only if you are really sure that it has not been done yet.

Sources to be checked:



Here are some examples of using the Java API instead of reimplementing it:

Null-safe compare

Comparing two objects, considering that both can be null, is a tricky one. I have seen some elegant and some less elegant implementations of it. But the best one I have seen is in the Objects class of Java ;)
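
For example, Objects.equals() from java.util handles the null cases for you (a and b stand for the two objects to compare):

import java.util.Objects;

// true if both are null, false if exactly one is null, otherwise a.equals(b)
boolean equal = Objects.equals(a, b);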

Null-safe toString

If you use code like this

return delta == null ? "" : delta.toString()

you should consider using Objects.toString() with a default value instead. It provides a null-safe conversion and avoids returning the String "null".
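
The two-argument overload is equivalent to the ternary expression above:

import java.util.Objects;

// returns "" instead of "null" when delta is null
return Objects.toString(delta, "");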

Do not create empty collections

Use the built-in constants in java.util.Collections

  • EMPTY_SET
  • EMPTY_LIST
  • EMPTY_MAP
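
The generic factory methods in Collections return the same immutable instances without unchecked-conversion warnings:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

// type-safe equivalents of the EMPTY_* constants
List<String> names = Collections.emptyList();
Set<Long> ids = Collections.emptySet();
Map<String, String> settings = Collections.emptyMap();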

Use already existing exceptions

A huge number of predefined exceptions already exists in the default Java packages. Why implement your own exceptions, when we have so many, like:
  • java.lang.IllegalStateException
  • java.lang.IllegalArgumentException
  • java.util.MissingResourceException
  • java.util.IllegalFormatException

Array operations

Programmers tend to implement operations on arrays only because ArrayUtils from Commons does not contain the required function. They forget that some of them are left out on purpose, because Arrays from java.util already contains them. The most useful ones, from my point of view, are:

  • copyOf
  • fill
  • toString
  • deepEquals
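
A quick illustration of these methods:

import java.util.Arrays;

int[] numbers = { 1, 2, 3 };
int[] extended = Arrays.copyOf(numbers, 5);            // {1, 2, 3, 0, 0}
Arrays.fill(extended, 3, 5, 9);                        // {1, 2, 3, 9, 9}
System.out.println(Arrays.toString(extended));         // [1, 2, 3, 9, 9]

String[][] first = { { "x" } };
String[][] second = { { "x" } };
System.out.println(Arrays.deepEquals(first, second));  // true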

Using builder for toString

Not everyone is lucky enough to be allowed to use the Lombok project for dynamic code generation via annotations. Without a proper framework, developers tend to let the IDE generate the toString method for their classes. The problem is maintainability: it is often forgotten to regenerate the method when the class changes.

The solution is the ToStringBuilder from Apache Commons, which gives you the possibility to generate toString via reflection.
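
A minimal sketch of the reflection-based variant (assuming commons-lang3; the older org.apache.commons.lang package offers the same method):

import org.apache.commons.lang3.builder.ToStringBuilder;

@Override
public String toString() {
    // reflection-based: picks up new fields automatically when the class changes
    return ToStringBuilder.reflectionToString(this);
}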

At the end of your development phase, when your code is likely to change only seldom, you can generate a toString without reflection. For this purpose, you can configure Eclipse to use ToStringBuilder instead of the built-in generator.









Tuesday, November 22, 2016

UTF-8 character problem with JIRA Application link

I have created a JIRA and a Confluence add-on. They communicated with each other via the REST API, using the Application link interface. I needed to call a REST function, implemented in my own JIRA add-on, from the Confluence add-on.


It worked fine until one of the REST parameters contained a non-ASCII character. When a parameter contained a German umlaut (öäüß), I got question marks on the JIRA side.


I wanted to prove that the problem is not with my JIRA plugin, so I tried to call standard, built-in REST functions via the application link as well. The example code shows how to call the REST API in order to retrieve project information.


public JsonJiraProject findJiraProject(String projectKey) {

    ApplicationLink appLink = appLinkService.getPrimaryApplicationLink(JiraApplicationType.class);
    ApplicationLinkRequestFactory factory = appLink.createAuthenticatedRequestFactory();
 
    ApplicationLinkRequest aplrq = factory.createRequest(Request.MethodType.GET, "http://localhost:2990/jira/rest/api/2/project/");

    aplrq.addRequestParameters("projectKey", projectKey);
    aplrq.setSoTimeout(APP_LINK_TIME_OUT);
    String jiraResponse = aplrq.execute();
    ...




I found that the same problem occurs with basic, built-in JIRA REST functions as well: the JIRA application link does not support UTF-8 parameters.

What I tried to get UTF-8 parameters working

  • I tried to set the header of the ApplicationLinkRequest, but it did not help.
    alr.addHeader("Content-Type", "application/json;charset=UTF-8");
  • I tried to define the parameter directly at the end of the URL, without adding it as a request parameter.
  • I tried to URL encode the URL together with the parameter.
  • I tried to call the function directly from a client (the RESTClient add-on in Firefox), and I got the correct answer.

So I am sure that the problem is in the ApplicationLinkRequest implementation, and it cannot be solved by setting any option or header parameter.


As a workaround I found that I need to convert my input fields to the ISO8859-1 character set. It does not cause any problems, and covers the characters (German umlauts) I needed to use.


I created a utility class with the following code to convert my input values:



import java.io.UnsupportedEncodingException;

import org.apache.commons.lang3.StringUtils;

public static String convertToIso8859(final String input) {
    if (StringUtils.isBlank(input)) {
        return "";
    }
    try {
        // re-encode the UTF-8 input bytes as ISO8859-1
        return new String(input.getBytes("UTF-8"), "ISO8859_1");
    } catch (UnsupportedEncodingException e) {
        return "";
    }
}

At the same time I restricted the allowed characters for my input fields, to avoid entering characters outside of the ISO8859-1 set.


Monday, November 21, 2016

Multiton design pattern




The Multiton pattern is a design pattern similar to the singleton, which allows only one instance of a class to be created. The Multiton pattern expands on the singleton concept to manage a map of named instances as key-value pairs.

Rather than having a single instance per virtual machine, the Multiton pattern ensures a single instance per key. The key does not have to be a String; it is handy to define it as an enumeration. Using an enumeration as the Multiton key limits the number of possible elements in the map, and therefore eliminates the greatest risk of using the Multiton pattern, namely the memory leak.
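
A minimal sketch of such an enum-keyed Multiton (class and enum names are hypothetical, loosely based on the ConnectionProperties example further below):

import java.util.EnumMap;
import java.util.Map;

public final class ConnectionProperties {

    public enum Domain { HR, INVENTORY, CUSTOMER }

    // the enum key limits the map to a fixed number of entries
    private static final Map<Domain, ConnectionProperties> INSTANCES = new EnumMap<>(Domain.class);

    private final Domain domain;

    private ConnectionProperties(Domain domain) {
        this.domain = domain;
        // expensive initialization (reading configuration, etc.) would happen here
    }

    // one instance per key, created lazily on first demand
    public static synchronized ConnectionProperties getInstance(Domain domain) {
        return INSTANCES.computeIfAbsent(domain, ConnectionProperties::new);
    }
}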

The pattern can be used when
  • creating an object requires more effort than storing it
  • some, but not too many, instances of the given class should exist in the application
  • instances should be created on first demand
  • instances should be accessible centrally
Advantages
  • The pattern simplifies the retrieval of shared objects in an application. Instances of the class can be accessed globally, via the static getInstance method of the class.
  • Makes it possible to keep structures with a high creation cost in memory. If creating the structure requires reading data and a large amount of calculation, it is worth not throwing it away once created, but storing it in a Multiton structure.

  • A possible example is the representation of domain-specific configuration data.

Disadvantages
  • Memory leak. As object instances, once created, are constantly referenced from the Multiton map, they cannot be garbage collected until the end of the application. Therefore I would not recommend using a Multiton for a set of objects with a dynamic number of elements.

  • As with many creational patterns, the Multiton introduces global state, and therefore makes it more complicated to test the application.

  • Makes dependency handling of related unit tests more complicated. If the constructor is private, and can be used only from inside the class, more sophisticated mocking is necessary.

Examples of usage

  • Storing database connection properties in a Multiton structure. If a predefined number of connections is used by the enterprise application, with well separated domains (like an HR database, Inventory database, Customer database, etc.), all connection information can be stored in a ConnectionProperties class. Instead of creating a ConnectionProperties instance every time a connection pool needs to be created, or a new element needs to be added to the pool, they can be stored in a Multiton structure.
    A ConnectionProperties instance will be read only on demand, and kept from that point on until the end of the application.
  • When implementing a GUI table with the requirement of showing a detail view of the selected element, you can store the detailed record information in a Multiton structure. The Multiton will contain a dynamically growing set of data, therefore it is advisable to implement a feature that removes elements that have not been used for a long time.


Sources
http://www.blackwasp.co.uk/Multiton.aspx
https://en.wikipedia.org/wiki/Multiton_pattern
http://gen5.info/q/2008/07/25/the-multiton-design-pattern




2016. november 17., csütörtök

Using the JIRA REST API via RESTClient



While implementing JIRA plugins, I recently had to use the JIRA REST API to create and modify issues.

As I did not want to install any desktop application, I decided to use the RESTclient plug-in for Firefox.

It provides a very convenient way to deal with REST requests, and makes it possible to try out your own REST functions fast.

I did, however, face some problems while setting up the client, so I list here how to do it properly, in order to avoid wasting time on the next occasion.
  • Install the RESTclient add-on for Firefox, from the Firefox application store
  • Start the plugin
  • Set up the request:
  • Method: POST
  • URL: http://localhost:2990/jira/rest/api/2/issue
  • Define authentication by adding a new authentication element to the request. Select basic authentication, and set username and password
  • Define content type by adding a new header element: Content-Type:application/json
  • Define the user agent by adding another header element: User-Agent:admin. This is necessary to avoid the JIRA error message "403 Forbidden". If it is not defined, you get an error from the REST API, and the following log entry is shown in the JIRA log:
    "XSRF checks failed for request"

  • Define message body

{
    "fields": {
       "project":
       { 
          "key": "TEST"
       },
       "summary": "REST ye merry gentleme.",
       "description": "Creating of an issue using project keys and issue type names using the REST API",
       "issuetype": {
          "name": "Bug"
       }
   }
}
  • Send the REST command.

For REST commands visit the JIRA documentation.


Wednesday, November 9, 2016

Lazy initialization design pattern


Creational design pattern

"... lazy initialization is the tactic of delaying the creation of an object, the calculation of a value, or some other expensive process until the first time it is needed."

Lazy init can be triggered by

  • calling the accessor method of an object
  • a low-priority background thread started at application startup
  • a scheduled job

It is a good tactic to have a central class responsible for the lazily initialized objects in the application. It is usually a singleton class that creates and stores the objects targeted by the lazy init.
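
A minimal sketch of lazy initialization in an accessor method (class and method names are hypothetical):

import java.util.ArrayList;
import java.util.List;

public class LazyHolder {

    private List<String> cachedNames;

    // the expensive value is created only when it is first requested;
    // synchronized to avoid double initialization from multiple threads
    public synchronized List<String> getNames() {
        if (cachedNames == null) {
            cachedNames = loadNames();
        }
        return cachedNames;
    }

    private List<String> loadNames() {
        // placeholder for the expensive operation (database read, file parsing, ...)
        return new ArrayList<>();
    }
}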

Advantages

  • Makes it possible to initialize an object just when it is required by a client.
  • Makes startup time of the application shorter.
  • Can save resources, like database connections, opened files or memory.


Risks

  • In a multi-threaded environment, special care needs to be taken in order to avoid multiple instantiation of the object. It can be inconvenient, or can cause serious performance problems in the system, and can therefore even be used to attack your system.
  • As with all optimization, you should only implement it if it solves a real performance problem. Try to avoid premature optimization before you even face the problem.
  • As it is easy to experience with JPA, you need to be careful when defining the amount of objects created by lazy initialization, or the amount of effort to be spent on it. You can easily load too much at the same time.
  • During debugging, the lazy init can take place triggered by the debugger. Therefore, the application can behave differently than under normal circumstances.


Real world examples

  • JPA lazy initialization for collection of related objects.
  • Database pools or any resource pool, where a resource is complicated and time consuming to be build.
  • LoadingCache in the Google Guava project. The cache loads a new element when it is first requested and not found among the cached objects.


My implementations

  1. I have created a small application that provided data to our company Avatar server. The data shown in Avatar was synchronized with different user directories of the company, like Active Directory, JIRA Crowd, Corporate Directory and LDIF. The domain object called User contained two profile pictures, one with small and another one with big resolution. The small one was used in user lists and in some other places. As the big one was used relatively rarely, it was not loaded into the User object until it was really needed.
     User objects were stored in a memory cache with limited capacity, so Users no longer needed were purged from the memory after a defined time of inactivity. So both the user object and the image inside it were subjects of lazy initialization.
  2. I had to check if input values from a Confluence macro contain any HTML tags, and remove them if so. As it was not possible either to use any Confluence utility or to install the OWASP packages with the plug-in, I implemented a small class to sanitize the input strings. It used a lot of Patterns to find the tags to be removed. The Pattern objects were initialized at first usage and stored in a static list.


Sources
https://en.wikipedia.org/wiki/Lazy_initialization
http://www.martinfowler.com/bliki/LazyInitialization.html


Monday, November 7, 2016

Code templates for Eclipse

In my daily work, I find it very productive to use my own code templates in Eclipse. Over time, I have collected a useful set of templates.

You can define your own templates by opening the dialog under Preferences\Java\Editor\Templates

Defining templates is quite straightforward. If you need a detailed introduction, please refer to the corresponding tutorial on Tutorialspoint.

Tips and tricks


Make it accessible

As it is possible to export and import templates, I keep my templates file under version control, so I can use them easily in all of my projects.

Naming

Naming my own templates is important, because I need to select them all for export, I want to handle them together, and I want to find them really fast while coding.
Therefore I add the prefix "my" to the template name, and use namespaces to group my templates. So my templates are named like this:

  • my.junit.assertTrue
  • my.log.createLogger
  • my.stringUtils.notBlank


Variables

You can use variables in your templates, which makes it possible to define complex expressions.
A complete list of the variables can be found in the Eclipse documentation.

Imports

If you use any class in the template that needs to be imported, define the required import statements as well. Sometimes I create a template for a commonly used class just to spare the time of importing the needed package. Commons StringUtils is such an example.


${:import(org.slf4j.Logger,org.slf4j.LoggerFactory)}
// Logger instance
private static final Logger log = LoggerFactory.getLogger(${enclosing_type}.class);${cursor}
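
A sketch of the my.stringUtils.notBlank template mentioned above could look like this (assuming the commons-lang3 package):

${:import(org.apache.commons.lang3.StringUtils)}
StringUtils.isNotBlank(${cursor})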


Static imports

If you use methods that should be imported statically, you can use the importStatic keyword:


${is:importStatic('org.junit.Assert.assertTrue')}

assertTrue(${cursor});


My templates are stored on GitHub, and are free for anyone to use. The collection contains the following templates:
  • Singleton
  • Lazy initialization in getter method
  • Basic JUnit assertions with imports
  • AssertThat method calls with static imports
  • Mockito commands (when, mock, times)
  • JUnit exception testing with the @Rule annotation
  • Logging (create logger, using different log methods, logging for exception)
  • Input validation for methods, using commons validation
  • Generating an enumeration with a field inside
  • Fixme and Todo templates




Friday, November 4, 2016

Using reflection in setup phase of unit testing

It is a common practice to use a mock framework in the setup phase of a unit test, in order to set the internal state of the dependent objects. Sometimes, though, it is easier to set the value of a given field directly, in order to test the class accordingly.

For modifying field values via reflection, I used the reflection utility classes from the org.apache.commons.lang3.reflect package.

In my example I had a class responsible for holding the application configuration, which is loaded from an XML file. The default location of the configuration file is hard-coded into the class. I wanted, however, to test the loadConfig method with different test configuration files, in order to check all possible configuration cases.


public class ConfigurationHolder {

    private static final String DEFAULT_CONFIGURATION_FILE_LOCATION = "./src/main/conf/cleaner.xml";

    private final String configurationFileLocation;

    private CleanerConfiguration cleanerConfiguration;

    private final static ConfigurationHolder instance = new ConfigurationHolder();

    private ConfigurationHolder() {
        this.configurationFileLocation = DEFAULT_CONFIGURATION_FILE_LOCATION;
    }

    public CleanerConfiguration getConfiguration() {
        if (cleanerConfiguration == null) {
            loadConfig();
        }

        return cleanerConfiguration;
    }

    private void loadConfig() {
        try {
            JAXBContext jaxbContext = JAXBContext.newInstance(CleanerConfiguration.class);
            Unmarshaller jaxbUnmarshaller = jaxbContext.createUnmarshaller();
            File file = new File(configurationFileLocation);
            cleanerConfiguration = (CleanerConfiguration) jaxbUnmarshaller.unmarshal(file);
        } catch (JAXBException e) {
            throw new RuntimeException("Problem with cleaner configuration: " + e.getMessage());
        }
    }

    public static ConfigurationHolder getInstance() {
        return instance;
    }

}

Using a mock framework, I would have had the following design possibilities:
  • Creating a class, like ConfigurationFileLocationProvider or ConfigurationFileProvider, to deliver the location of the file, or the file itself, and using that class in ConfigurationHolder.
    In this case I believe I would have introduced a dependency just to make testing possible.

  • Passing the file location in the constructor. In this case I would have lost the possibility to use the class as a singleton.

So I decided to keep the simple implementation and use reflection in the unit test. I decided to use the Commons reflection package, as it seems to be a more sophisticated solution than the one from the Spring framework.

As I wanted to set the value of a simple private field, I implemented the following method in the unit test class:


private ConfigurationHolder createConfigurationHolder(String configFileLocation) throws IllegalAccessException {
    ConfigurationHolder cr = ConfigurationHolder.getInstance();
    FieldUtils.writeField(cr, "configurationFileLocation", configFileLocation, true);
    FieldUtils.writeField(cr, "cleanerConfiguration", null, true);
    return cr;
}

In my test case I use this method to obtain the instance configured with the given file location:

@Test
public void testGetConfiguration() throws Exception {
    String configFileLocation = "./src/test/conf/cleaner-config-test.xml";
    ConfigurationHolder cr = createConfigurationHolder(configFileLocation);

    CleanerConfiguration configuration = cr.getConfiguration();
    Assert.assertThat(configuration, notNullValue());
}


As always, you need to consider what leads you to modify the state for a unit test, and which problems you can possibly face.

Advantages
  • you can change the internal state of your class without adding unwanted dependencies to it.
  • you can test objects without setter methods, like classes used for XML or JSON data modelling (with the @XmlRootElement or @JsonAutoDetect annotation). These classes are usually generated without setter methods, and instances get filled up via reflection.

Disadvantages
  • as with the solution where unit testing is done via a mock framework, you need to design the class carefully to avoid restrictions regarding reflection. In this case, I had to use the configurationFileLocation property, because the DEFAULT_CONFIGURATION_FILE_LOCATION constant cannot be modified by reflection.
  • As the name of the field to be modified via reflection is hard-coded into the unit test case, you need to be careful when refactoring the class. I do not see it as a big risk, however, because you need to run the unit tests during the refactoring anyway, so you will be forced to change the relevant test cases as well.

Further useful methods from the FieldUtils class:
  • removeFinalModifier() makes it possible to change even final fields
  • writeStaticField() makes it possible to change even static fields



Wednesday, September 28, 2016

Using Application Link in Atlassian plugins



When implementing complex plug-ins for the Atlassian suite, it might be necessary to request information from another Atlassian application. You can see examples of this when JIRA shows Stash commits or Confluence pages.

As all Atlassian applications have a REST API, it is possible to make JSON calls directly. Using Application Links for communication between Atlassian products, however, has the following advantages:

  • It uses the central authentication of the Atlassian applications. You can take advantage of the centralized CROWD user management, and don't need to store a user and password in some configuration of your plug-in. You also don't need to set authentication data in your code.
  • You use components of the common com.atlassian.applinks.api package, which is part of the core features of all Atlassian applications. All needed components are included in the com.atlassian.applinks maven group, which is automatically provided by the application. Therefore you do not need to add any additional maven dependencies to your project.
  • As application linking is supported by all Atlassian products, you can reach all of them with a single programming API from your add-on.
The disadvantage of the solution is that you are working with pure JSON responses, so in order to map them into the Java world, you need a mapping framework (like Jackson or Gson), and most probably have to implement your own classes holding the returned information.

If you want to use a single application in a more object-oriented way, you can consider using a product-specific API like JiraRestClient from the com.atlassian.jira.rest.client.api package.

Configuring Application link

In order to use functionalities of JIRA from my Confluence plug-in, I have created an Application link between the two servers. In the local development environment I had to configure the link to use the "Trusted application" authentication type in order to get it working.

I took the following steps to create the application link:

  1. Start the JIRA and the Confluence server locally, using the atlas-run command in the root directory of the plug-ins.
  2. Log in as admin to both applications.
  3. In JIRA navigate to Administration/Applications/Application links.
  4. Create a new application link following the wizard. The process includes going to the Confluence administration page and setting up the inverse link as well.
  5. After the application links are visible on both servers, set the authentication for the incoming and outgoing link to "OAuth (impersonation)" everywhere.
    Doing so, you configure a trusted application connection, and you do not need to bother with authentication in the plug-in code.

Implementing usage of Application link

I have implemented an example to show how to get a list of all JIRA fields in Confluence, using the application link to JIRA.

The call via application link uses the REST API URL providing field information: http://javadeveloper:2990/jira/rest/api/2/field

The result from JIRA looks like this:


[
{"id":"issuetype","name":"Issue Type","custom":false,"orderable":true,
 "navigable":true,"searchable":true,"clauseNames":["issuetype","type"],
 "schema":{"type":"issuetype","system":"issuetype"}},
{"id":"components","name":"Component/s","custom":false,"orderable":true,
  "navigable":true,"searchable":true,"clauseNames":["component"],
  "schema":{"type":"array","items":"component","system":"components"}}, ...

As the first step, I create an ApplicationLinkRequest pointing to the given URL.


final static int APP_LINK_TIME_OUT = 60000;

// Logger instance
private static final Logger log = LoggerFactory.getLogger(JiraServiceCaller.class);

@Autowired
private ApplicationLinkService appLinkService;

/**
 * Creates ApplicationLinkRequest for calling JIRA REST service, based on the restServiceUrl parameter.
 * 
 * @param restServiceUrl
 * @return
 */
protected ApplicationLinkRequest createApplicationLinkRequest(String restServiceUrl) {
 MethodType methodType = Request.MethodType.POST;
 ApplicationLinkRequest aplrq = createApplicationLinkRequest(restServiceUrl, methodType);
 return aplrq;
}

private ApplicationLinkRequest createApplicationLinkRequest(String restServiceUrl, MethodType methodType) {
 ApplicationLink appLink = appLinkService.getPrimaryApplicationLink(JiraApplicationType.class);
 if (appLink == null) {
  log.info("Failed to handle REST request. CredentialsRequiredException occured.");
  throw new JiraConnectionException("Unable to get application link of type 'JiraApplication'");
 }

 ApplicationLinkRequestFactory factory = appLink.createAuthenticatedRequestFactory();
  ApplicationLinkRequest aplrq = null;
 try {
  aplrq = factory.createRequest(methodType, appLink.getRpcUrl() + restServiceUrl);
  aplrq.setSoTimeout(APP_LINK_TIME_OUT);

 } catch (CredentialsRequiredException e) {
  log.warn("Error while creating ApplicationLinkRequest", e);
  throw new JiraConnectionException("Unable to connect JIRA via application link. Error message: '" + e.getMessage() + "'");
 }
 return aplrq;
}


Then I execute the configured ApplicationLinkRequest to get the result of the REST API call.


private static final String REST_JIRA_RETRIEVE_FIELDS = "/rest/api/2/field";
 
/**
 * Retrieves a list containing name of all existing JIRA fields
 */
@Override
public List<String> retrieveFieldNames() {
 List<String> result = new ArrayList<String>();

 try {
  ApplicationLinkRequest alr = createApplicationLinkGetRequest(REST_JIRA_RETRIEVE_FIELDS);
  String jiraResponse = alr.execute();
  if (StringUtils.isNotBlank(jiraResponse)) {
   ObjectMapper mapper = new ObjectMapper();
   JsonJiraField[] myObjects = mapper.readValue(jiraResponse, JsonJiraField[].class);
   for (int i = 0; i < myObjects.length; i++) {
    JsonJiraField jsonJiraField = myObjects[i];
    result.add(jsonJiraField.getName());
   }
  }
 } catch (IOException | ResponseException e) {
  log.warn("Failed to handle REST request. IOException occured.", e.getMessage());
 }
 return result;
}


For mapping between the JSON string and the Java code, I implemented a simple value object holding the JIRA field info, and used the ObjectMapper from Jackson to fill it up with data.
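
The value object itself is not shown in the post; a minimal sketch of what JsonJiraField could look like (assuming Jackson 2 annotations; with Jackson 1 only the annotation package differs):

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

// only the properties we need are mapped, everything else in the JSON is ignored
@JsonIgnoreProperties(ignoreUnknown = true)
public class JsonJiraField {

    private String id;
    private String name;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}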

Add parameters to the REST call

The ApplicationLinkRequest makes it possible to add parameters to your REST call as key - value pairs, like this:



alr.addRequestParameters("projectKey", projectKey);






Tuesday, September 27, 2016

Different ways to use assertions in Java code



In order to write clean and safe code, I prefer to implement as many state checks in my code as possible or necessary.

If a state check fails, you usually want to throw an exception and stop processing the current business step. A failing state check is always a sign of an implementation problem. As this kind of problem should be found during your test phase, your end users must never face this kind of exception.

You should consider using assertion in the following situations:
  • Parameter check of the non private methods used by your own components.  
  • As stated in many programming design guides, you always need to check the parameters of a public method. It helps to provide a clean API and makes your code safer.
  • Check the state of your object before start to execute a critical operation.
It is also an advantage of assertions that they can be considered "active comments" in the code. Using assertions with an appropriate error message documents the code, and provides well defined functionality at the same time. Assertions also help to test the API contract by writing unit tests directly against them.

You should carefully consider using assertions in following situations:
  • Checking parameters of private methods. In most cases it is unnecessary, as the callers of the method are controlled by you.
  • Checking objects returned from other software components, treating them as if they were input to your component. As the connection and the returned objects can depend on the environment, it does not necessarily indicate a programming error if they are not in an appropriate state. In such situations throw a business exception instead.

You have the following ways to implement assertions.

Simple Java check

Implement the state check in plain Java code, and throw the appropriate exception when it fails.
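
For example, the refresh-interval check used as an assertion further below could be written in plain Java like this (MAX_REFRESH_RATE is assumed to be a constant of the class):

public void setRefreshInterval(int interval) {
    // validate input parameter with a plain check and a runtime exception
    if (interval <= 0 || interval > 1000 / MAX_REFRESH_RATE) {
        throw new IllegalArgumentException("Illegal refresh interval: " + interval);
    }
    // ... set the interval
}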

Advantages
  1. You are not bound to a specific framework. 
  2. You can throw any kind of exception, depending of the situation. 
  3. You can implement your own Exception (preferably as a subclass of RuntimeException), and use it for validations. You can implement the appropriate exception handling for validation problems, making it possible for example to log them separately. You can even inform the developers about problems in the production environment.

Disadvantages
  1. You are repeating yourself in the code. 
  2. In a team, all members will implement their own validation, with different, ad hoc Exceptions, and they will tend to forget to add meaningful messages to the exceptions.
  3. You do not have a common way to handle validation failures, as any kind of exception can be thrown. 
  4. Therefore you will soon start to implement your own little framework for validating the state, which is the root of the evil :) 

Java assertion



public void setRefreshInterval(int interval) {
    // validate input parameter
    assert interval > 0 && interval <= 1000 / MAX_REFRESH_RATE : interval;
    // ... set the interval
}


Advantages
  1. The check is done only in the test or development environment, which makes the production code slightly faster.
  2. An AssertionError does not have to be caught, so it does not require additional code to handle it.
Disadvantages
  1. The check is done only in the test or development environment, which makes problems that were not found during acceptance very dangerous, causing exceptions (if you are lucky) or incorrect business operations (if you are not so lucky).
  2. It is not always obvious how to run unit tests with assertions enabled. With Maven's Surefire plugin assertions are switched on by default, but you need to check this when using other build tools.
    Starting unit tests with assertions enabled from a development tool like Eclipse is also not convenient.

Guava Preconditions class

Using the Preconditions class from the Guava framework makes it easy to define simple validation rules. It does not, however, offer many methods.
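
A minimal sketch with Guava, reusing the same hypothetical example:

import com.google.common.base.Preconditions;

public void setRefreshInterval(int interval) {
    // throws IllegalArgumentException with the formatted message when the check fails
    Preconditions.checkArgument(interval > 0, "interval must be positive, but was %s", interval);
    // ... set the interval
}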

Advantages
  1. Single utility class to validate conditions
Disadvantages
  1. A small set of validation methods

Commons Validate class

Using the Validate class makes it easy to implement validation rules. Its convenient methods are easy to use and follow an easy-to-read naming convention.
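
The same hypothetical check with Commons Validate (assuming commons-lang3):

import org.apache.commons.lang3.Validate;

public void setRefreshInterval(int interval) {
    // throws IllegalArgumentException when the condition is false
    Validate.isTrue(interval > 0, "interval must be positive, but was %d", interval);
    // ... set the interval
}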

Advantages
  1. Works with older Java versions as well.
  2. Lots of methods to check null values, collections, ranges, special checks for Strings.
  3. Continuously growing set of methods
Disadvantages
  1. Validation cannot be restricted to the test environment.
  2. There is no possibility to inform the developer when a problem occurs in production.
Sources
http://www.cnblogs.com/rollenholt/p/3655511.html
http://stackoverflow.com/questions/5172948/should-we-always-check-each-parameter-of-method-in-java-for-null-in-the-first-li


Wednesday, September 21, 2016

Figure out if a field has been changed in JIRA event handler




In the previous post I created an event handler for JIRA issues. Now I will show how to check whether a particular field has been changed during the event to be handled.

JIRA uses GenericValue objects from the org.ofbiz.core.entity package to store event related information. GenericValue is basically a Map implementation that stores all kinds of data. So the key to finding the information we need is to know which map keys we have to use to get the data.

I have implemented a small utility class to find the relevant elements in the issue event, and to answer whether a given field has been changed.


import java.util.List;

import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.ofbiz.core.entity.GenericEntityException;
import org.ofbiz.core.entity.GenericValue;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.atlassian.jira.event.issue.IssueEvent;
import com.atlassian.jira.issue.fields.CustomField;
import com.google.common.collect.ImmutableMap;

/**
 * Wrapper class to gain custom field specific information from the JIRA issue event
 * 
 * @author Peter Varga
 *
 */
class IssueEventWrapper {
 // keys to be searched in GenericValue objects
 private static final String KEY_CHANGE_ITEM = "ChangeItem";
 private static final String KEY_ID = "id";
 private static final String KEY_GROUP = "group";
 private static final String KEY_FIELD = "field";

 // Logger instance
 private static final Logger log = LoggerFactory.getLogger(IssueEventWrapper.class);

 private final IssueEvent issueEvent;

 /**
  * @param issueEvent
  */
 public IssueEventWrapper(IssueEvent issueEvent) {
  super();
  this.issueEvent = issueEvent;
 }

 /**
  * Answers if value of the given field has been changed during the issue event
  * 
  * @param customField
  * @return true if value of the given field has been changed
  */
 boolean fieldValueHasChanged(CustomField customField) {
  try {
   GenericValue changeLog = issueEvent.getChangeLog();
   if (changeLog == null) {
    return false;
   }

   List<GenericValue> changeItems = findChangeItems(changeLog);
   if (CollectionUtils.isEmpty(changeItems)) {
    return false;
   }

   for (GenericValue changedItem : changeItems) {
    // name of the field changed
    String field = changedItem.getString(KEY_FIELD);
    if (StringUtils.equals(field, customField.getFieldName())) {
     return true;
    }
   }
  } catch (GenericEntityException ex) {
   log.error(ex.getMessage(), ex);
  }
  return false;
 }

 /*
  * Returns the list of change items, each containing the change information for one changed field of the issue.
  */
 private List<GenericValue> findChangeItems(GenericValue changeLog) throws GenericEntityException {
  Object id = changeLog.get(KEY_ID);
  ImmutableMap<String, Object> map = new ImmutableMap.Builder<String, Object>().put(KEY_GROUP, id).build();
  return changeLog.internalDelegator.findByAnd(KEY_CHANGE_ITEM, map);
 }

}