
Tuesday, March 5, 2019

Batching database operations with Spring and Hibernate

Recently I needed to do some database performance optimization for a Spring Boot application. The application persisted a huge amount of log-like data to an Oracle database via Spring Data JPA. With Hibernate and Spring Data JPA it was not really efficient to write this data, due to the overhead of single datasource, transaction, and commit operations for every record.

When writing a large amount of data that will not be modified afterwards, you should consider batch operations, meaning that a bunch of records gets written into the database as a single operation.

Batch mode with Hibernate

It is possible to set a batch size for Hibernate using the hibernate.jdbc.batch_size property. When you persist a bunch of elements, they are temporarily stored in the first-level cache. The property tells Hibernate the maximum number of elements to keep in the cache before flushing them to the database. It is useful to avoid an OutOfMemoryError caused by storing too many elements in the cache. In this case the entities are written using the JDBC Statement's addBatch() and executeBatch() methods.
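In a Spring Boot application, this property can be passed through to Hibernate via application.properties. The value 30 below is only an example, and the two order_* flags are optional extras that help Hibernate group statements of the same type into one batch:

```properties
# Flush to the database after every 30 buffered entities
spring.jpa.properties.hibernate.jdbc.batch_size=30
# Optional: order statements by entity type so batches are not broken up
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true
```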

This is, however, not the behavior I really needed. According to the Hibernate documentation, the batch size property should be set between 10 and 50. As my application persisted far more data in a short period of time, I wanted to be able to define a higher value and change it dynamically.

The above-mentioned Hibernate batching only works if you persist multiple elements in the same transaction, without an implicit or explicit commit or flush in between. If you make single method calls in separate transactions, the writes are executed as individual statements.

It is also worth mentioning that the hibernate.jdbc.batch_size property applies globally to the Hibernate instance, so it affects the whole application, which I wanted to avoid.

Solution

As it was not possible to get Spring Data JPA and Hibernate to work in batched mode the way I needed, I had to go one level deeper and implemented the batching with plain JDBC calls.

I created a service that is responsible for executing database operations in batch. The example shows a batched insert operation, but it obviously works the same way for update or delete.


@Service
public class InsertActiveChannelBatchDao {

 private static final Logger log = LoggerFactory.getLogger(InsertActiveChannelBatchDao.class);

 private static final String INSERT_STATEMENT = "insert into ACTIVE_CHANNEL (id, client_id, device_id, channel_id, play_type, start_time, created, end_time, status)"
   + " values "
   + "(?, ?, ?, ?, ?, ?, ?, ?, 0)";

 @Autowired
 private DataSource dataSource;

 public void insertAll(List<ActiveChannel> activeChannels) {
  if (activeChannels.isEmpty()) {
   return;
  }
  log.info("Batch insert ActiveChannel entities: {}", activeChannels.size());
  Connection connection = null;

  try {
   connection = dataSource.getConnection();
   // try-with-resources makes sure the statement is closed in every case
   try (PreparedStatement insertStatement = connection.prepareStatement(INSERT_STATEMENT)) {

    for (ActiveChannel activeChannel : activeChannels) {
     setInsertStatementParameters(activeChannel, insertStatement);
     insertStatement.addBatch();
    }

    long start = System.currentTimeMillis();
    insertStatement.executeBatch();
    log.info("Batch insert ActiveChannel entities finished: {} in {} ms", activeChannels.size(),
      System.currentTimeMillis() - start);
   }
  } catch (SQLException e) {
   log.warn("Exception occurred while inserting records: " + e.getMessage());
  } finally {
   DataSourceUtils.releaseConnection(connection, dataSource);
  }
 }
}


The service gets the database connection from the datasource used by Hibernate. It creates a PreparedStatement and executes the inserts as a batch. At the end of the operation, the connection must be released back to the underlying pool. Please note that it is not necessary to commit the changes explicitly, because autocommit is enabled on connections obtained from this datasource.

The ActiveChannelBatchService service acts as a controller for the batched database operations. It stores the elements to be persisted in a queue and writes them to the database periodically.



@Service
public class ActiveChannelBatchService {

 private static final Logger logger = LoggerFactory.getLogger(ActiveChannelBatchService.class);

 private final InsertActiveChannelBatchDao insertActiveChannelBatchDao;

 private boolean batchingEnabled;

 private int batchSize;

 private final ConcurrentLinkedQueue<ActiveChannel> entitiesToBePersisted = new ConcurrentLinkedQueue<>();

 @Autowired
 public ActiveChannelBatchService(InsertActiveChannelBatchDao insertActiveChannelBatchDao,
   @Value("${spring.datasource.batching.enabled}") boolean batchingEnabled,
   @Value("${spring.datasource.batching.batchSize}") int batchSize) {

  this.insertActiveChannelBatchDao = insertActiveChannelBatchDao;
  this.batchingEnabled = batchingEnabled;
  this.batchSize = batchSize;
 }

 @Async
 public void createNew(ActiveChannel activeChannel) {
  logger.info("Entering createNew(). Parameter: {} ", activeChannel);
  if (batchingEnabled) {
   entitiesToBePersisted.add(activeChannel);
  } else {
   insertActiveChannelBatchDao.insert(activeChannel);
  }
 }

 @Scheduled(fixedDelayString = "${spring.datasource.batching.commitIntervalInMs}")
 public void doDatabaseChanges() {
  if (!batchingEnabled) {
   return;
  }

  List<ActiveChannel> activeChannelsToBePersisted = retrieveActiveChannelsToBePersisted();
  insertActiveChannelBatchDao.insertAll(activeChannelsToBePersisted);
 }
 
 @PreDestroy
 public void doDbChangesBeforeShutDown() {
  logger.info("Writing all changes into database before shut down");
  doDatabaseChanges();
 }

 private <T extends Object> List<T> poll(ConcurrentLinkedQueue<T> queue) {
  List<T> result = new ArrayList<>(batchSize);
  for (int i = 0; i < batchSize; i++) {
   T activeChannel = queue.poll();
   if (activeChannel == null) {
    return result;
   }
   result.add(activeChannel);
  }
  return result;
 }

 List<ActiveChannel> retrieveActiveChannelsToBePersisted() {
  return poll(entitiesToBePersisted);
 }

}

As you can see, the following properties can be configured:

  • how often the service should perform the batch operation
  • how many elements should be written into the database at once
  • whether batching is enabled at all.

In order not to lose entities when the service gets shut down, the doDbChangesBeforeShutDown() method was marked with the @PreDestroy annotation.


Thursday, February 28, 2019

Custom exception for WebSocket connections


Recently I needed to implement a custom check for the WebSocket connections of a Spring Boot application. The logic had to check whether a configured connection limit had been reached, and reject further connection attempts if so.

The connections were opened by a mobile application client, and the client developer wanted to get the error message in JSON form, in order to parse it easily into an error object.

The Spring framework makes it possible to define interceptors for the WebSocket communication. In my case the interceptor looked like this:


final class NumberOfConnectedClientsChannelInterceptor implements ChannelInterceptor {
 private static final Logger log = LoggerFactory.getLogger(NumberOfConnectedClientsChannelInterceptor.class);

 private final SocketServiceConfiguration socketServiceConfiguration;

 private final LifeSocketConnectionRepository lifeSocketConnectionDao;

 public NumberOfConnectedClientsChannelInterceptor(SocketServiceConfiguration socketServiceConfiguration,
   LifeSocketConnectionRepository lifeSocketConnectionDao) {

  this.socketServiceConfiguration = socketServiceConfiguration;
  this.lifeSocketConnectionDao = lifeSocketConnectionDao;
 }

 @Override
 public Message<?> preSend(Message<?> message, MessageChannel channel) {
  StompHeaderAccessor accessor = MessageHeaderAccessor.getAccessor(message, StompHeaderAccessor.class);
  checkConnectionLimit(accessor);

  return message;
 }

 private void checkConnectionLimit(StompHeaderAccessor accessor) {
  if (StompCommand.CONNECT.equals(accessor.getCommand())) {

   long alreadyConnectedClients = lifeSocketConnectionDao.count();

   log.info("Checking connection limit. Connection limit: {} Number of connections: {} ",
     socketServiceConfiguration.getMaxNumberOfConnections(), alreadyConnectedClients);

   if (alreadyConnectedClients >= socketServiceConfiguration.getMaxNumberOfConnections()) {
    log.warn(
      "Too many connected clients. Connection refused. Connection limit: {} Number of connections: {}",
      socketServiceConfiguration.getMaxNumberOfConnections(), alreadyConnectedClients);
    throw WebsocketConnectionLimitException.ofConnectionLimitError();
   }
  }
 }
}

The interceptor needs to be registered in the WebSocket configuration class:

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

...
 @Override
 public void configureClientInboundChannel(ChannelRegistration registration) {
  registration.interceptors(
   new NumberOfConnectedClientsChannelInterceptor(socketServiceConfiguration, lifeSocketConnectionDao));
 }

}

As you can see, the interceptor can only throw an exception to indicate the problem. The solution is a custom exception extending MessagingException.


import org.springframework.messaging.MessagingException;

public class WebsocketConnectionLimitException extends MessagingException {

 private static final String CONNECTION_LIMIT_MESSAGE = "{\"errorCode\": 403, \"errorMessage\":\"Too many connected clients\"}";

 private WebsocketConnectionLimitException(String message) {
  super(message);
 }

 public static WebsocketConnectionLimitException ofConnectionLimitError() {
  return new WebsocketConnectionLimitException(CONNECTION_LIMIT_MESSAGE);
 }
}

In case of any other exception type, the Spring framework returns the String representation of the exception, instead of the message itself.

According to the STOMP specification, the client must be able to decode the following escaped characters in header values:
  • \r (octet 92 and 114) translates to carriage return (octet 13)
  • \n (octet 92 and 110) translates to line feed (octet 10)
  • \c (octet 92 and 99) translates to : (octet 58)
  • \\ (octet 92 and 92) translates to \ (octet 92)

Since the ":" character is used quite commonly in JSON, the client will have problems with deserialization if it cannot handle this escaping out of the box.

For more details see http://stomp.github.io/stomp-specification-1.2.html#Value_Encoding
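To illustrate the escaping rules above, here is a minimal sketch of a client-side unescaping routine. The class and method names are my own for illustration, not part of any STOMP client library:

```java
// Sketch: unescaping STOMP 1.2 header values (hypothetical helper class).
public class StompHeaderValues {

    public static String unescape(String value) {
        StringBuilder result = new StringBuilder(value.length());
        for (int i = 0; i < value.length(); i++) {
            char c = value.charAt(i);
            if (c == '\\' && i + 1 < value.length()) {
                char next = value.charAt(++i);
                switch (next) {
                    case 'r':  result.append('\r'); break; // carriage return
                    case 'n':  result.append('\n'); break; // line feed
                    case 'c':  result.append(':');  break; // colon
                    case '\\': result.append('\\'); break; // backslash
                    default:
                        // The spec treats undefined escape sequences as a fatal protocol error
                        throw new IllegalArgumentException("Undefined escape sequence: \\" + next);
                }
            } else {
                result.append(c);
            }
        }
        return result.toString();
    }
}
```

After unescaping, the `\c` sequences in the error message become colons again, and the payload parses as normal JSON.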


Tuesday, October 24, 2017

Implementing inheritance for JSON objects


Our app developers wanted to send slightly different JSON elements in a single list, in order to make the app-side implementation far less complicated. To make this possible, I decided to implement an inheritance hierarchy for the JSON input and the related Java DTO classes.

The other possibility would be to have a JSON object with the union of the fields from all Java classes and use @JsonIgnoreProperties(ignoreUnknown = true) on them. This way only the relevant fields would be parsed into the given Java class.

Advantages of using a hierarchy in the JSON-related classes:

  • more object-oriented implementation
  • you can declare different types of objects in a single JSON list, as long as they share the same Java super class
  • the automatic REST input validation of the Spring framework still works for the classes. You do not need to define checking logic with mixed fields and complicated conditions.
  • easy to add new types in the future, without affecting the existing ones

Disadvantages:
  • you need to consider whether storing different elements in a single list is a good idea at all
  • complicated structure
  • error-prone type definition. It is not possible to define an enumeration or any other typed value for the name property of @Type (see below)
  • unit testing of the hierarchy is more complicated
  • difficult to change or break up the hierarchy structure in case modification is needed later on

I created a common super class as the root of the inheritance and defined the field "actionType" as the discriminator.


@Data
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "actionType")
@JsonSubTypes({
 @Type(value = TriggeredNotificationActionDto.class, name = "NOTIFICATION"),
 @Type(value = TriggeredDeviceActionDto.class, name = "DEVICE")
})
public static class TriggeredActionDto implements Serializable {
 private String actionType;
}

I defined the actual subclasses (using Lombok annotations for getter and setter generation). I also defined validators for the fields; they are used by the Spring framework when the objects act as input parameters of a REST call.

@Data
@EqualsAndHashCode(callSuper = true)
public static class TriggeredNotificationActionDto extends TriggeredActionDto {
 public enum NotificationActionType {
  WEBSOCKET, PUSH;
 }

 @JsonProperty("notificationType")
 @NotNull(message = "notificationType must not be blank!")
 private NotificationActionType notificationType;
}



@Data
@EqualsAndHashCode(callSuper = true)
public static class TriggeredDeviceActionDto extends TriggeredActionDto {
 @ApiModelProperty(example = "swagger001")
 @JsonProperty("deviceId")
 @NotNull(message = "Device id must not be blank!")
 private String deviceId;

 @ApiModelProperty(example = "1")
 @JsonProperty("channelId")
 @NotNull(message = "Channel id must not be blank!")
 @Range(min = 1, message = "Channel id must be a positive number")
 private int channelId;

 @ApiModelProperty(example = "1")
 @JsonProperty("value")
 @NotNull(message = "Value must not be blank!")
 private int value;

...
}


In the class containing the TriggeredActionDto elements, I marked the list with @Valid, in order to force validation of each element of the list.


@JsonProperty("tasks")
@Valid
private List<TriggeredActionDto> tasks;
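A request payload for such a mixed list could look like the following sketch (the values come from the Swagger examples above); Jackson picks the concrete DTO class based on the actionType discriminator:

```json
{
  "tasks": [
    { "actionType": "NOTIFICATION", "notificationType": "PUSH" },
    { "actionType": "DEVICE", "deviceId": "swagger001", "channelId": 1, "value": 1 }
  ]
}
```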



Tips and tricks using Spring Data



Spring Data is a very comfortable way of defining DAO objects for your project. It can generate a DAO for your entity class and provide all basic CRUD functions out of the box. You only need to define an interface that extends CrudRepository.

For more information on Spring Data basics I recommend reading the following resources:
  • http://projects.spring.io/spring-data
  • https://www.petrikainulainen.net/spring-data-jpa-tutorial/
The basic functionality of Spring Data is, however, not always enough. Here I collect some more interesting examples of using the possibilities of the framework.

Searching for String with prefix using Query


@Transactional
@Repository
public interface PushServiceConfigurationDao extends CrudRepository<PushSecrviceConfiguration, String> {

@Query(value = "SELECT c.value FROM PushSecrviceConfiguration c where c.key = UPPER(CONCAT('GOOGLE_GCM_API_KEY_', :appName))")
public String findGmcAuthKeyByAppName(@Param("appName") String appName);
}

Get record count


@Query(value = "SELECT COUNT(l) FROM Logic l where l.enabled = true AND l.gateway.id = :gatewayId")
int numberOfEnabledLogicsByGatewayId(@Param("gatewayId") String gatewayId);

Using enum as search parameter


@Query(value = "SELECT t FROM Logic t where t.gateway.id = :gatewayId and t.logicType = :logicType")
List<Logic> findAllByGatewayIdAndLogicType(@Param("gatewayId") String gatewayId, @Param("logicType") LogicType logicType);

Using @EntityGraph to control eager loading of elements


Defining the entitygraph in your domain class


@NamedEntityGraphs({
 @NamedEntityGraph(name = "graph.gateway.authTokens", attributeNodes = @NamedAttributeNode("authTokens")),
 @NamedEntityGraph(name = "graph.gateway.devices", attributeNodes = @NamedAttributeNode("devices"))
})
public class Gateway implements java.io.Serializable { ...

Using @EntityGraph annotation in your query

@EntityGraph(value = "graph.gateway.authTokens", type = EntityGraphType.LOAD)
Gateway findByDeviceCode(String deviceCode);

Defining native update command


@Modifying
@Query(value = "UPDATE ws_message_queue SET processor_id = ?2 WHERE backend_ip = ?1 AND processor_id is null", nativeQuery = true)
int reserveMessagesForProcessing(String backendIp, String processorId);


Thursday, October 5, 2017

Custom validator for REST parameter in Spring Boot


In Spring Boot applications, it is very easy to let the Spring framework do the validation of the REST input parameters. You only need to add the corresponding validation annotations to the properties of the parameter class and mark the parameter with the @Valid annotation in the method header.


@RequestMapping(value = "/register", method = { RequestMethod.POST })
public RestResponse register(HttpServletRequest httpServletRequest, 
       @Valid @RequestBody PushRegistrationRequest registrationRequest) {



If your REST input object is validated with the @Valid annotation, the default validations like @NotNull, @Pattern, @Size, etc. are enough most of the time. Sometimes, though, you need to do more complex validations considering multiple properties. To achieve this, you need to write your own annotation and a corresponding validator class.

The annotation looks like this:


@Target({ ElementType.TYPE, ElementType.METHOD, ElementType.FIELD, ElementType.PARAMETER })
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Constraint(validatedBy = { HasCorrectAppIdValidator.class })
public @interface HasCorrectAppId {
 String message() default "App id is incorrect";

 Class<?>[] groups() default {};

 Class<? extends Payload>[] payload() default {};
}

As you can see above, in the validatedBy attribute you can define a validation class. The validator class needs to implement the javax.validation.ConstraintValidator interface.


import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;

public class HasCorrectAppIdValidator implements ConstraintValidator<HasCorrectAppId, PushRegistrationRequest> {

 @Override
 public void initialize(HasCorrectAppId constraintAnnotation) {
  // do nothing
 }

 @Override
 public boolean isValid(PushRegistrationRequest value, ConstraintValidatorContext context) {
  if (IOS.equals(value.getOs()) && value.getAppId().length() != 64) {
   String errorMessage = "length must be 64 for " + ClientOperationSystem.IOS;
   createErrorInContext(context, errorMessage);
   return false;

  } else if (ANDROID.equals(value.getOs()) && value.getAppId().length() < 65) {
   String errorMessage = "length must be at least 65 for " + ClientOperationSystem.ANDROID;
   createErrorInContext(context, errorMessage);
   return false;
  }
  return true;
 }

 private void createErrorInContext(ConstraintValidatorContext context, String errorMessage) {
  context.disableDefaultConstraintViolation();
  context.buildConstraintViolationWithTemplate(errorMessage)
    .addPropertyNode("appId")
    .addConstraintViolation();
 }
}


That's it. To tell the framework to use the validator, annotate the parameter class with the new annotation.




@HasCorrectAppId
public class PushRegistrationRequest implements Serializable {
    ...
}


Thursday, September 14, 2017

Create Spring service used only in test environment


In my project, we use three different environments for running the application.
  • Production
  • SIT (system integration)
  • Local development
In the production environment there is a third-party system that authenticates the user and automatically adds a header value to the request.

I wanted to implement a feature which always requires the authentication information, so I needed to add mocked information to the REST request header in the SIT and local development environments.

I used a service to process the request and created additional subclasses for the local development and SIT environments. The subclasses must not be annotated as @Service; otherwise you get an exception at startup, stating that your service class is not unique.

The configuration of my local development environment looks like this:

@Configuration
@EnableScheduling
@EnableAsync
@EnableAspectJAutoProxy
@EnableTransactionManagement
@Profile("backend_localdev")
@EnableConfigurationProperties(DaoConfiguration.class)
public class BackendConfigurationLocalDev extends BackendConfiguration {

 @SuppressWarnings("unused")
 @Autowired
 private DaoConfiguration daoConfiguration;

 @Bean(value = "crbService")
 public CrbService getCrbService() {
  return new CrbServiceLocalDev();
 }
}


In order to add header values to the REST request, I implemented an HttpServletRequestWrapper class.

My service for local development looks like this:


public class CrbServiceLocalDev extends CrbService {

 // Logger instance
 private static final Logger log = LoggerFactory.getLogger(CrbServiceLocalDev.class);

 private static final String HARDCODED_CRB_USER_STATUS_HEADER_VALUE = "{\"key\":\"value\"}";

 @Override
 public CrbUserInfo createCrbUserInfo(HttpServletRequest request) {
  log.warn("Entering CrbServiceLocalDev.createCrbUserInfo(). Must be used only for local development");

  HttpServletRequestWrapper wrapper = new LocalDevHttpServletRequestWrapper(request);
  return super.createCrbUserInfo(wrapper);
 }

 private class LocalDevHttpServletRequestWrapper extends HttpServletRequestWrapper {

  public LocalDevHttpServletRequestWrapper(HttpServletRequest request) {
   super(request);
  }

  @Override
  public String getHeader(String name) {

   if (StringUtils.isBlank(name)) {
    return super.getHeader(name);
   }

   final String value = getRequest().getParameter(name);
   if (!StringUtils.isBlank(value)) {
    return value;
   }

   if (MobileAppCrbUserStatus.HEADER_PARAM_X_CRB_USERSTATUS.equals(name)) {
    log.warn("Hardcoded userstatus header value is returned: {}", HARDCODED_CRB_USER_STATUS_HEADER_VALUE);
    return HARDCODED_CRB_USER_STATUS_HEADER_VALUE;
   }

   return super.getHeader(name);
  }
 }

}