Tuesday, March 5, 2019

Batching database operations with Spring and Hibernate

Recently I needed to do some database performance optimization for a Spring Boot application. The application persisted a huge amount of log-like data to an Oracle database via Spring Data JPA. With Hibernate and Spring Data JPA, writing this data was not really efficient, due to the overhead of the individual datasource, transaction, and commit operations.

When writing a lot of non-modifiable data, you should consider batch operations, meaning that a bunch of records gets written into the database as a single operation.

Batch mode with Hibernate

It is possible to set a batch size for Hibernate, using the hibernate.jdbc.batch_size property. When you persist a bunch of entities, they are temporarily stored in the first-level cache. The property tells Hibernate the maximum number of elements to keep in the cache before flushing them out to the database. It is useful to avoid an OutOfMemoryError caused by storing too many elements in the cache. In this case the entities are written using the JDBC statement's addBatch() and executeBatch() methods.
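In a Spring Boot application, this property can be passed through to Hibernate via application.properties. A minimal sketch (the value 30 is just an example; order_inserts is an optional companion setting, not part of the original setup):

spring.jpa.properties.hibernate.jdbc.batch_size=30
# optional: group inserts by entity type so they can be batched together
spring.jpa.properties.hibernate.order_inserts=true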

This is, however, not the behavior I really needed. According to the Hibernate documentation, the batch size property should be set between 10 and 50. As my application persisted way more data in a short period of time, I wanted to be able to define a higher value and to change it dynamically.

The above-mentioned Hibernate batching works only if you persist multiple entities in the same transaction, without an implicit or explicit commit or flush in between. If you use single method calls, each running in its own transaction, single write operations are executed.

It is also worth mentioning that the hibernate.jdbc.batch_size property applies globally to the Hibernate instance, so it has an impact on the whole application, which I wanted to avoid.

Solution

As it is not possible to get Spring Data JPA and Hibernate to work in batched mode in the way I needed, I had to go one level deeper and implement the batching with plain JDBC calls, like this:

I created a service that is responsible for executing database operations in batch. The example shows a batched insert operation, but it obviously works the same way for updates or deletes.


import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

import javax.sql.DataSource;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.datasource.DataSourceUtils;
import org.springframework.stereotype.Service;

@Service
public class InsertActiveChannelBatchDao {

 private static final Logger log = LoggerFactory.getLogger(InsertActiveChannelBatchDao.class);

 private static final String INSERT_STATEMENT = "insert into ACTIVE_CHANNEL (id, client_id, device_id, channel_id, play_type, start_time, created, end_time, status)"
   + " values "
   + "(?, ?, ?, ?, ?, ?, ?, ?, 0)";

 @Autowired
 private DataSource dataSource;

 public void insertAll(List<ActiveChannel> activeChannels) {
  if (activeChannels.isEmpty()) {
   return;
  }
  log.info("Batch insert ActiveChannel entities: {}", activeChannels.size());
  Connection connection = null;

  try {
   connection = dataSource.getConnection();
   try (PreparedStatement insertStatement = connection.prepareStatement(INSERT_STATEMENT)) {

    for (ActiveChannel activeChannel : activeChannels) {
     setInsertStatementParameters(activeChannel, insertStatement);
     insertStatement.addBatch();
    }

    long start = System.currentTimeMillis();
    insertStatement.executeBatch();
    log.info("Batch insert ActiveChannel entities finished: {} in {} ms", activeChannels.size(),
      System.currentTimeMillis() - start);
   }
  } catch (SQLException e) {
   log.warn("Exception occurred while inserting records: " + e.getMessage());
  } finally {
   DataSourceUtils.releaseConnection(connection, dataSource);
  }
 }

 // setInsertStatementParameters() and the single-row insert() used when
 // batching is disabled are omitted for brevity
}


The service obtains a database connection from the datasource, creates a PreparedStatement, and executes the inserts as a batch. At the end of the operation, the connection must be released back to the underlying pool. Note that there is no need to commit the changes explicitly, because the connection obtained from the datasource has autocommit set to true.

The ActiveChannelBatchService service acts as a controller for the batched database operations. It stores the elements to be persisted in a queue and writes them to the database periodically.



import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

import javax.annotation.PreDestroy;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;

// @Async and @Scheduled require @EnableAsync and @EnableScheduling in the application configuration
@Service
public class ActiveChannelBatchService {

 private static final Logger logger = LoggerFactory.getLogger(ActiveChannelBatchService.class);

 private final InsertActiveChannelBatchDao insertActiveChannelBatchDao;

 private final boolean batchingEnabled;

 private final int batchSize;

 private final ConcurrentLinkedQueue<ActiveChannel> entitiesToBePersisted = new ConcurrentLinkedQueue<>();

 @Autowired
 public ActiveChannelBatchService(InsertActiveChannelBatchDao insertActiveChannelBatchDao,
   @Value("${spring.datasource.batching.enabled}") boolean batchingEnabled,
   @Value("${spring.datasource.batching.batchSize}") int batchSize) {

  this.insertActiveChannelBatchDao = insertActiveChannelBatchDao;
  this.batchingEnabled = batchingEnabled;
  this.batchSize = batchSize;
 }

 @Async
 public void createNew(ActiveChannel activeChannel) {
  logger.info("Entering createNew(). Parameter: {} ", activeChannel);
  if (batchingEnabled) {
   entitiesToBePersisted.add(activeChannel);
  } else {
   insertActiveChannelBatchDao.insert(activeChannel);
  }
 }

 @Scheduled(fixedDelayString = "${spring.datasource.batching.commitIntervalInMs}")
 public void doDatabaseChanges() {
  if (!batchingEnabled) {
   return;
  }

  List<ActiveChannel> activeChannelsToBePersisted = retrieveActiveChannelsToBePersisted();
  insertActiveChannelBatchDao.insertAll(activeChannelsToBePersisted);
 }

 @PreDestroy
 public void doDbChangesBeforeShutDown() {
  logger.info("Writing all changes into the database before shutdown");
  // a single doDatabaseChanges() run persists at most batchSize elements,
  // so keep flushing until the queue is empty
  while (!entitiesToBePersisted.isEmpty()) {
   doDatabaseChanges();
  }
 }

 private <T> List<T> poll(ConcurrentLinkedQueue<T> queue) {
  List<T> result = new ArrayList<>(batchSize);
  for (int i = 0; i < batchSize; i++) {
   T element = queue.poll();
   if (element == null) {
    return result;
   }
   result.add(element);
  }
  return result;
 }

 List<ActiveChannel> retrieveActiveChannelsToBePersisted() {
  return poll(entitiesToBePersisted);
 }

}

As you can see, the following properties can be configured:

  • how often should the service perform the batch operation
  • how many elements should be written into the database at once
  • whether batching is enabled at all.

In order not to lose entities while the service is shut down, the doDbChangesBeforeShutDown() method is marked with the @PreDestroy annotation.
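In application.properties, the entries referenced by the @Value annotations above could look like this (the values are illustrative):

spring.datasource.batching.enabled=true
spring.datasource.batching.batchSize=500
spring.datasource.batching.commitIntervalInMs=1000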


Thursday, February 28, 2019

Custom exception for WebSocket connections


Recently I needed to implement a custom check for the WebSocket connections of a Spring Boot application. The logic had to check whether a configured connection limit had been reached, and reject further connection attempts if so.

The connections were opened by a mobile application client, and the developer wanted to get the error message in JSON form, in order to parse it easily into an error object.

The Spring framework makes it possible to define interceptors for the WebSocket communication. In my case the interceptor looked like this:


import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.simp.stomp.StompCommand;
import org.springframework.messaging.simp.stomp.StompHeaderAccessor;
import org.springframework.messaging.support.ChannelInterceptor;
import org.springframework.messaging.support.MessageHeaderAccessor;

final class NumberOfConnectedClientsChannelInterceptor implements ChannelInterceptor {
 private static final Logger log = LoggerFactory.getLogger(NumberOfConnectedClientsChannelInterceptor.class);

 private final SocketServiceConfiguration socketServiceConfiguration;

 private final LifeSocketConnectionRepository lifeSocketConnectionDao;

 public NumberOfConnectedClientsChannelInterceptor(SocketServiceConfiguration socketServiceConfiguration,
   LifeSocketConnectionRepository lifeSocketConnectionDao) {

  this.socketServiceConfiguration = socketServiceConfiguration;
  this.lifeSocketConnectionDao = lifeSocketConnectionDao;
 }

 @Override
 public Message<?> preSend(Message<?> message, MessageChannel channel) {
  StompHeaderAccessor accessor = MessageHeaderAccessor.getAccessor(message, StompHeaderAccessor.class);
  if (accessor != null) { // non-STOMP messages have no STOMP header accessor
   checkConnectionLimit(accessor);
  }

  return message;
 }

 private void checkConnectionLimit(StompHeaderAccessor accessor) {
  if (StompCommand.CONNECT.equals(accessor.getCommand())) {

   long alreadyConnectedClients = lifeSocketConnectionDao.count();

   log.info("Checking connection limit. Connection limit: {} Number of connections: {} ",
     socketServiceConfiguration.getMaxNumberOfConnections(), alreadyConnectedClients);

   if (alreadyConnectedClients >= socketServiceConfiguration.getMaxNumberOfConnections()) {
    log.warn(
      "Too many connected clients. Connection refused. Connection limit: {} Number of connections: {}",
      socketServiceConfiguration.getMaxNumberOfConnections(), alreadyConnectedClients);
    throw WebsocketConnectionLimitException.ofConnectionLimitError();
   }
  }
 }
}

The interceptor needs to be registered in the WebSocket configuration class:

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

...
 @Override
 public void configureClientInboundChannel(ChannelRegistration registration) {
  registration.interceptors(
   new NumberOfConnectedClientsChannelInterceptor(socketServiceConfiguration, lifeSocketConnectionDao));
 }

}

As you can see, the interceptor can only throw an exception to indicate the problem. The solution is a custom exception extending MessagingException.


import org.springframework.messaging.MessagingException;

public class WebsocketConnectionLimitException extends MessagingException {

 private static final String CONNECTION_LIMIT_MESSAGE = "{\"errorCode\": 403, \"errorMessage\":\"Too many connected clients\"}";

 private WebsocketConnectionLimitException(String message) {
  super(message);
 }

 // factory method used by the interceptor above
 public static WebsocketConnectionLimitException ofConnectionLimitError() {
  return new WebsocketConnectionLimitException(CONNECTION_LIMIT_MESSAGE);
 }
}

In the case of any other exception type, the Spring framework returns the String representation of the exception instead of the message itself.

According to the STOMP specification, the client must be able to unescape the following special characters in header values:
  • \r (octet 92 and 114) translates to carriage return (octet 13)
  • \n (octet 92 and 110) translates to line feed (octet 10)
  • \c (octet 92 and 99) translates to : (octet 58)
  • \\ (octet 92 and 92) translates to \ (octet 92)

Since the ":" character is used quite commonly in JSON, the client will have problems with deserialization if it cannot handle the unescaping out of the box.
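If your client library does not handle this out of the box, a minimal hand-rolled unescaping helper in Java could look like this (not part of any STOMP library, just a sketch of the rules above):

// Translates the escape sequences defined by the STOMP 1.2 specification
// back to their original characters in a header value.
static String unescapeStompHeaderValue(String value) {
 StringBuilder result = new StringBuilder(value.length());
 for (int i = 0; i < value.length(); i++) {
  char c = value.charAt(i);
  if (c == '\\' && i + 1 < value.length()) {
   char next = value.charAt(++i);
   switch (next) {
   case 'r': result.append('\r'); break;
   case 'n': result.append('\n'); break;
   case 'c': result.append(':'); break;
   case '\\': result.append('\\'); break;
   default: throw new IllegalArgumentException("Undefined escape sequence: \\" + next);
   }
  } else {
   result.append(c);
  }
 }
 return result.toString();
}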

For more details see http://stomp.github.io/stomp-specification-1.2.html#Value_Encoding


Thursday, July 5, 2018

Setting up MDC context with AOP in Spring Boot application


In order to be able to analyse the logs of complex processes in your application, it is useful to use the Mapped Diagnostic Context (MDC) and put some process-specific information into it. Currently I am implementing a Spring Boot application with several services and lots of calls between them. As a huge number of users work in parallel, when it comes to analysing a production log it is not easy to find the subsequent calls belonging to a single REST request in the log file.

To make my life easier, there is already a unique session id in the log, set via MDC, which helps find all the calls belonging to a specific user and session. But to make the analysis even easier, I decided to add function-specific information to the logs. So, when a REST method gets called, it can set a logical function name in the MDC. All later calls to helper or DAO classes are then logged with the same function name and can be found easily.

Obviously, setting the MDC value could be done at the beginning of every REST method as well, like this:


@RequestMapping(value = "/logic/delete", method = { RequestMethod.POST })
public ResponseEntity<String> delete(HttpServletRequest httpServletRequest, @Valid @RequestBody DeleteLogicRequest request) {
    // functionName would be a hard-coded literal here, e.g. "DeleteLogic"
    MDC.put(BACKEND_FUNCTION_NAME, "[" + functionName + "]");

    // do the subsequent service calls...

Fortunately, using Spring AOP, it is easy to implement this cross-cutting functionality in a more sophisticated way.

First, I defined an annotation to mark services where the function information needs to be set into the MDC context before any of their public methods is called.



import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Used to mark classes in which, before every method call, the function information should be set into the MDC context. All logs of subsequent
 * method calls on any bean will contain the MDC information.
 */
@Target({ ElementType.TYPE })
@Retention(RetentionPolicy.RUNTIME)
public @interface ClassWithMdcContext {
 String functionName() default "";
}


I also defined another annotation to make it possible to mark single methods. The function name information is set into the MDC before the marked methods are called.


import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Used to mark methods before whose invocation the function information should be set into the MDC context. All logs of subsequent method calls
 * on any bean will contain the MDC information.
 */
@Target({ ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
public @interface MethodWithMdcContext {
 String functionName() default "";
}


The AOP magic happens in the following class.


import java.lang.reflect.Method;

import org.apache.commons.lang3.StringUtils;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.slf4j.MDC;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Service;

// Marks this aspect to run first. Needed because logging also happens via AOP, and the MDC information must be set
// before any other AOP activity.
@Order(1)
@Service
@Aspect
public class MdcContextInitialiser {
 private static final String BACKEND_FUNCTION_NAME = "backendFunctionName";

 @Around("methodsAnnoatatedWithMethodWithMdcContext()")
 public Object aroundAnnotatedMethods(ProceedingJoinPoint joinPoint) throws Throwable {
  setMdcContextForMethod(joinPoint);
  return joinPoint.proceed();
 }

 @Around("classesAnnotatedWithClassWithMdcContext()")
 public Object aroundAnnotatedClass(ProceedingJoinPoint joinPoint) throws Throwable {
  setMdcContextForClass(joinPoint);
  return joinPoint.proceed();
 }

 @Pointcut(value = "@annotation(MethodWithMdcContext)")
 public void methodsAnnotatedWithMethodWithMdcContext() {
  // defines pointcut for methods annotated with MethodWithMdcContext
 }

 @Pointcut("@within(ClassWithMdcContext)") // this should work for the annotation service pointcut
 private void classesAnnotatedWithClassWithMdcContext() {
  // defines pointcut for classes annotated with ClassWithMdcContext
 }

 private void setMdcContextForMethod(ProceedingJoinPoint joinPoint) {
  MethodSignature signature = (MethodSignature) joinPoint.getSignature();
  Method method = signature.getMethod();
  MethodWithMdcContext annotation = method.getAnnotation(MethodWithMdcContext.class);

  String functionName = annotation.functionName();

  if (StringUtils.isBlank(functionName)) {
   functionName = getClassName(signature.getDeclaringTypeName()) + "_" + method.getName();
  }

  MDC.put(BACKEND_FUNCTION_NAME, "[" + functionName + "]");
 }

 private void setMdcContextForClass(ProceedingJoinPoint joinPoint) {
  MethodSignature signature = (MethodSignature) joinPoint.getSignature();
  Class<?> clazz = signature.getDeclaringType();
  ClassWithMdcContext annotation = clazz.getAnnotation(ClassWithMdcContext.class);

  String functionName = annotation.functionName();

  if (StringUtils.isBlank(functionName)) {
   functionName = getClassName(signature.getDeclaringTypeName()) + "_" + signature.getMethod().getName();
  }

  MDC.put(BACKEND_FUNCTION_NAME, "[" + functionName + "]");
 }

 private String getClassName(String classFullName) {
  int startIndexOfClassName = StringUtils.lastIndexOf(classFullName, ".") + 1;
  return StringUtils.substring(classFullName, startIndexOfClassName);
 }

}


The MdcContextInitialiser class initializes the MDC context according to the annotations above. I defined a pointcut for each of the annotations and an around advice, in order to perform the MDC initialization before the actual method is called.

The functionName property in both annotations allows you to define any logical function name for the log, like "StartupCall", "AddToCart" or "Purchase". If it is not defined, a combination of the class and method name is used.
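For illustration, marking a hypothetical service class could look like this (CartService is a made-up example):

import org.springframework.stereotype.Service;

@ClassWithMdcContext(functionName = "AddToCart")
@Service
public class CartService {

 // every public method call on this bean is logged with [AddToCart] in the MDC
 public void addItem(String itemId) {
  // business logic...
 }
}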

In order to get the MDC data printed in the log, you need to add the key "backendFunctionName" to the log pattern in the logback.xml.


<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
 <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
  <level>INFO</level>
 </filter>
 <encoder>
  <Pattern>%d %-5level %X{genericID} %X{backendFunctionName} %C{0} %L : %msg%n</Pattern>
 </encoder>
</appender>


After marking some classes or methods with the corresponding annotation, your log will contain the function name. For example, when the LogicRestService.getAll() method is called, the log looks like this:



     [exec] 2018-07-05 16:00:42,704 INFO  [LogicRestService_getAll] AuthenticationFilter 56 : Processing URL: https://localhost/rest/logic/all
     [exec] 2018-07-05 16:00:42,706 INFO  [LogicRestService_getAll] AuthenticationFilter 183 : URL needs to be processed: https://localhost/rest/logic/all for type crbheader
     [exec] 2018-07-05 16:00:42,706 INFO  [LogicRestService_getAll] AuthenticationFilter 183 : URL needs to be processed: https://localhost/rest/logic/all for type ManualLogin
     [exec] 2018-07-05 16:00:42,706 INFO  [LogicRestService_getAll] AuthenticationFilter 118 : URL https://localhost/rest/logic/all requires manual login: true
     [exec] 2018-07-05 16:00:42,706 INFO  [LogicRestService_getAll] AuthenticationFilter 125 : User logged in with manual login: true
     [exec] 2018-07-05 16:00:42,707 INFO  [LogicRestService_getAll] ExecutionTimeMonitor 119 : Entering GatewayDao.findGatewayIdBySubscriberId(String) parameters: [51125096]
     [exec] 2018-07-05 16:00:42,710 INFO  [LogicRestService_getAll] ExecutionTimeMonitor 124 : Exiting GatewayDao.findGatewayIdBySubscriberId(String) : 3 ms result : Optional.empty
     [exec] 2018-07-05 16:00:42,714 INFO  [LogicRestService_getAll] ExecutionTimeMonitor 119 : Entering LogicRestService.getAll(HttpServletRequest) parameters: [org.springframework.web.util.ContentCachingRequestWrapper@72689384]
     [exec] 2018-07-05 16:00:42,716 INFO  [LogicRestService_getAll] ExecutionTimeMonitor 119 : Entering LogicDao.findByGatewaySubscriberIdAndWorkingState(String) parameters: [53474139]
     [exec] 2018-07-05 16:00:42,718 INFO  [LogicRestService_getAll] ExecutionTimeMonitor 124 : Exiting LogicDao.findByGatewaySubscriberIdAndWorkingState(String) : 2 ms result : []
     [exec] 2018-07-05 16:00:42,718 INFO  [LogicRestService_getAll] ExecutionTimeMonitor 124 : Exiting LogicRestService.getAll(HttpServletRequest) : 4 ms result : <200 OK,[],{}>

As you can see, every subsequent log entry contains the function information. This makes it quite easy to identify all the logs belonging to a given REST call, even if they are interleaved with entries from other processes or sessions.

Monday, February 12, 2018

Checking parameter of a mocked method call with Mockito


If you are using Mockito, it is sometimes useful to check the state of a parameter passed to a mocked method call. I use it in the following cases:

  • A DAO method, like create or update, is called in the code. I want to check whether the entity has been set up correctly before it is written to the database, but I do not want to use an integration test with an in-memory database for it. (With an in-memory database I would be able to read the entity back after writing it and check its new state.)
  • I want to collect the parameters of subsequent calls into a list, in order to analyse them later on (see the sketch at the end of this post).

To achieve this, you can implement a custom Answer object for the method invocation and pass it to the then() method.



import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.equalTo;
import static org.mockito.Mockito.when;

import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.junit.MockitoJUnitRunner;
import org.mockito.stubbing.Answer;

...

@Mock
private GatewayDeviceXmlVersionDao gatewayDeviceXmlVersionDao;

@InjectMocks
private ReadDeviceXmlFileVersionResponseProcessor processor;

...
@Test
public void testProcess_storesUnknownVersionWhenFailed() throws Exception {
 when(gatewayDeviceXmlVersionDao.findByGatewayId(Mockito.eq(GATEWAY_ID))).thenReturn(null);

 Answer<?> answer = new Answer<Object>() {
  @Override
  public Object answer(InvocationOnMock invocation) throws Throwable {
   GatewayDeviceXmlVersion versionSaved = invocation.getArgument(0);
   assertThat(versionSaved.getCurrentDeviceXmlVersion(), equalTo(ReadDeviceXmlFileVersionResponse.UNKNOWN_VERSION));
   assertThat(versionSaved.getUpdateStatus(), equalTo(UpdateProcessStatus.OK));
   return null;
  }
 };
 // checking state of version saved into the database
 when(gatewayDeviceXmlVersionDao.save(Mockito.any(GatewayDeviceXmlVersion.class))).then(answer);

 processor.process();
}


As you can see, you can get the parameter from the invocation object and do whatever you want with it.
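For the second use case listed at the beginning, collecting the parameters of subsequent calls, a sketch using the same mocked DAO could look like this (written as a Java 8 lambda):

List<GatewayDeviceXmlVersion> savedEntities = new ArrayList<>();

// every save() call records its argument for later analysis
when(gatewayDeviceXmlVersionDao.save(Mockito.any(GatewayDeviceXmlVersion.class)))
  .then(invocation -> {
   savedEntities.add(invocation.getArgument(0));
   return null;
  });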

With this solution you can avoid creating a custom mock implementation. You can use the mocked object as before, and stub or verify its other methods using Mockito's other features.


Tuesday, November 21, 2017

Automatic dependency update detection with Gradle

Gradle can check whether the libraries used by your project have newer versions. In order to use this feature, you need to add the following elements to your Gradle configuration:


buildscript {
    repositories {
        // the plugin is resolved from the Gradle Plugin Portal (assumed; adjust to your setup)
        gradlePluginPortal()
    }
    dependencies {
        classpath 'com.github.ben-manes:gradle-versions-plugin:0.17.0'
    }
}
 
apply plugin: 'com.github.ben-manes.versions'

After configuring the project, you can run the following command to get the update info:

gradlew dependencyUpdates -Drevision=release

It lists the dependencies and possible newer versions.

What I learned 

  • Be careful with the result of dependencyUpdates, as it is not a hundred percent accurate.
  • It can also happen that there are transitive dependencies between your libraries. If you need detailed information about the dependency hierarchy, you can use Gradle's dependency tree command:

    gradlew dependencies

    Unfortunately at the time of writing this post, there is no Eclipse support for the hierarchy tree.
  • Sometimes it is not even possible to use the latest version of a library. For example, in the case of PowerMock, you need to check the compatibility page first:
    https://github.com/powermock/powermock/wiki/Mockito
  • Due to transitive dependencies, it is not always possible to move to the latest version of a library directly. Simply declaring the latest version can cause problems in the Eclipse build or in the Gradle build process.

    In order to solve the problem, you need to exclude the previous version of a given library pulled in by another dependency. Excluding in Gradle looks like this:

    compile('org.springframework.boot:spring-boot-starter-test') {
        exclude group: 'org.mockito', module: 'mockito-core'
    }


Tuesday, October 24, 2017

Implementing inheritance for JSON objects


Our app developers wanted to send slightly different JSON elements in a single list, in order to make the app-side implementation far less complicated. To make this possible, I decided to implement an inheritance hierarchy for the JSON input and the related Java DTO classes.

The other possibility would have been a single JSON object with the union of the fields from all Java classes, using @JsonIgnoreProperties(ignoreUnknown = true). That way only the relevant fields would be parsed into the given Java class.

Advantages of using a hierarchy in the JSON-related classes:

  • a more object-oriented implementation
  • you can declare different types of objects in a single JSON list, as long as they have the same Java superclass
  • the automatic REST input validation of the Spring framework still works for the classes. You do not need to define checking logic with mixed fields and complicated conditions.
  • it is easy to add new types in the future, without affecting the existing ones

Disadvantages:
  • you need to consider whether storing different elements in a single list is a good idea at all
  • more complicated structure
  • error-prone type definition: it is not possible to define an enumeration or any other typed value for the name property of @Type (see below)
  • unit testing of the hierarchy is more complicated
  • it is difficult to change or break up the hierarchy structure in case modification is needed later on

I created a common superclass for the root of the inheritance and defined the field "actionType" as the discriminator.


@Data
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "actionType")
@JsonSubTypes({
 @Type(value = TriggeredNotificationActionDto.class, name = "NOTIFICATION"),
 @Type(value = TriggeredDeviceActionDto.class, name = "DEVICE")
})
public static class TriggeredActionDto implements Serializable {
 private String actionType;
}

Then I defined the actual subclasses (using Lombok annotations for getter and setter generation). I also defined validation constraints for the fields; these are applied by the Spring framework when the objects act as the input parameter of a REST call.

@Data
@EqualsAndHashCode(callSuper = true)
public static class TriggeredNotificationActionDto extends TriggeredActionDto {
 public enum NotificationActionType {
  WEBSOCKET, PUSH;
 }

 @JsonProperty("notificationType")
 @NotNull(message = "notificationType must not be blank!")
 private NotificationActionType notificationType;
}



@Data
@EqualsAndHashCode(callSuper = true)
public static class TriggeredDeviceActionDto extends TriggeredActionDto {
 @ApiModelProperty(example = "swagger001")
 @JsonProperty("deviceId")
 @NotNull(message = "Device id must not be blank!")
 private String deviceId;

 @ApiModelProperty(example = "1")
 @JsonProperty("channelId")
 @NotNull(message = "Channel id must not be blank!")
 @Range(min = 1, message = "Channel id must be a positive number")
 private int channelId;

 @ApiModelProperty(example = "1")
 @JsonProperty("value")
 @NotNull(message = "Value must not be blank!")
 private int value;

...
}


In the class containing the TriggeredActionDto elements, I marked the list as @Valid, in order to force validation of each element of the list.


@JsonProperty("tasks")
@Valid
private List<TriggeredActionDto> tasks;
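
A matching JSON payload with a mixed list could look like this (the field values are illustrative):

{
  "tasks": [
    { "actionType": "NOTIFICATION", "notificationType": "PUSH" },
    { "actionType": "DEVICE", "deviceId": "swagger001", "channelId": 1, "value": 1 }
  ]
}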



Tips and tricks using Spring Data



Spring Data is a very comfortable way of defining DAO objects for your project. It can generate a DAO implementation for your entity class and provide all basic CRUD functions out of the box. You only need to define an interface extending CrudRepository.
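A minimal sketch, assuming a hypothetical Customer entity with a Long primary key:

import org.springframework.data.repository.CrudRepository;

// save(), findById(), findAll(), deleteById() etc. are provided out of the box
public interface CustomerDao extends CrudRepository<Customer, Long> {
}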

For more information on Spring Data basics, I recommend reading the following resources:
  • http://projects.spring.io/spring-data
  • https://www.petrikainulainen.net/spring-data-jpa-tutorial/
It is, however, not always enough to use the basic functionality of Spring Data. I have collected here some more interesting examples of using the possibilities of the framework.

Searching for a String with a prefix using @Query


@Transactional
@Repository
public interface PushServiceConfigurationDao extends CrudRepository<PushServiceConfiguration, String> {

 @Query(value = "SELECT c.value FROM PushServiceConfiguration c where c.key = UPPER(CONCAT('GOOGLE_GCM_API_KEY_', :appName))")
 public String findGcmAuthKeyByAppName(@Param("appName") String appName);
}

Get record count


@Query(value = "SELECT COUNT(l) FROM Logic l where l.enabled = true AND l.gateway.id = :gatewayId")
int numberOfEnabledLogicsByGatewayId(@Param("gatewayId") String gatewayId);

Using an enum as a search parameter


@Query(value = "SELECT t FROM Logic t where t.gateway.id = :gatewayId and t.logicType = :logicType")
List<Logic> findAllByGatewayIdAndLogicType(@Param("gatewayId") String gatewayId, @Param("logicType") LogicType logicType);

Using @EntityGraph to control eager loading of elements


Defining the entity graph in your domain class


@NamedEntityGraphs({
 @NamedEntityGraph(name = "graph.gateway.authTokens", attributeNodes = @NamedAttributeNode("authTokens")),
 @NamedEntityGraph(name = "graph.gateway.devices", attributeNodes = @NamedAttributeNode("devices"))
})
public class Gateway implements java.io.Serializable { ...

Using the @EntityGraph annotation in your query

@EntityGraph(value = "graph.gateway.authTokens", type = EntityGraphType.LOAD)
Gateway findByDeviceCode(String deviceCode);

Defining a native update command


@Modifying
@Query(value = "UPDATE ws_message_queue SET processor_id = ?2 WHERE backend_ip = ?1 AND processor_id is null", nativeQuery = true)
int reserveMessagesForProcessing(String backendIp, String processorId);