Monday, January 30, 2023

Using JobRunr

Introduction
JobRunr is a Java library for distributed background job processing (often positioned as an alternative to Quartz rather than being built on it). The explanation is as follows:
JobRunr requires a datastore to store the jobs and its metadata, ...
Maven
We include the following dependency:
<dependency>
  <groupId>com.jobrunr</groupId>
  <artifactId>jobrunr-spring-boot-starter</artifactId>
  <version>5.3.3</version>
</dependency>
application.properties
Example
We do the following:
jobrunr.enabled=true

org.jobrunr.dashboard.enabled=true
org.jobrunr.dashboard.port=8000
Example
We do the following:
org.jobrunr.background-job-server.enabled=true
org.jobrunr.dashboard.enabled=true
Monitoring jobs
The explanation is as follows:
JobRunr provides a dashboard that allows you to monitor the status of your jobs. You can access the dashboard by navigating to http://localhost:8000/dashboard in your browser.
1. Fire And Forget Job
The enqueue method is used. The explanation is as follows:
These jobs are executed once, and the result is not returned to the caller. This is useful for tasks that don’t require any feedback or response from the job. You can create a fire-and-forget job using the BackgroundJob.enqueue() method, which accepts a lambda expression as a parameter. 
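Stripped of JobRunr, fire-and-forget is just submitting work to a pool and returning immediately; a plain-Java sketch of the idea (JobRunr additionally persists the job in its datastore so it survives restarts and can be retried):

```java
import java.util.concurrent.ExecutorService;

public class FireAndForget {
    // Hand the lambda to a worker pool and return immediately, without
    // waiting for or reading a result; the caller gets no feedback.
    public static void enqueue(ExecutorService workers, Runnable job) {
        workers.submit(job); // caller neither blocks nor sees a return value
    }
}
```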

2. Scheduled Job
The schedule method is used. The explanation is as follows:
These jobs are executed at a specified time or interval. You can create a scheduled job by using the BackgroundJob.schedule() method, and it accepts a lambda expression, or you can inject an instance of a class.
Example
We do the following:
@Service
public class MyJobs {

  private final JobScheduler jobScheduler;

  @Autowired
  public MyJobs(JobScheduler jobScheduler) {
    this.jobScheduler = jobScheduler;
  }

  @Scheduled(cron = "1 1 19 * * *") // Spring cron has six fields: second minute hour day-of-month month day-of-week
  public void scheduleJob() {
    jobScheduler.schedule(LocalDateTime.now().plusHours(5), () -> doSomething()); // schedule a one-off job for a point in time
  }
}
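For intuition, the daily trigger above (second 1, minute 1, hour 19) can be modeled with plain java.time; nextDailyRun below is a hypothetical helper, not Spring's cron parser:

```java
import java.time.LocalDateTime;
import java.time.LocalTime;

public class NextRun {
    // Next fire time of a schedule that runs once per day at fireAt:
    // today's occurrence if it is still ahead of us, otherwise tomorrow's.
    public static LocalDateTime nextDailyRun(LocalDateTime now, LocalTime fireAt) {
        LocalDateTime todayRun = now.toLocalDate().atTime(fireAt);
        return now.isBefore(todayRun) ? todayRun : todayRun.plusDays(1);
    }
}
```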
3. Recurring jobs
The explanation is as follows:
These jobs are executed on a recurring basis. You can create a recurring job using the BackgroundJob.scheduleRecurrently() method, which accepts a cron expression or a Cron object.
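The recurring behavior can be sketched in plain Java with a ScheduledExecutorService; JobRunr's scheduleRecurrently() plays the same role, but drives the schedule from a cron expression and stores the job so it keeps firing across restarts:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class Recurring {
    // Run the job now and then again every periodMillis milliseconds.
    public static ScheduledExecutorService every(long periodMillis, Runnable job) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(job, 0, periodMillis, TimeUnit.MILLISECONDS);
        return scheduler;
    }
}
```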
4. Adding loggers and progress bars
The explanation is as follows:
Loggers: JobRunr supports logging, allowing you to log messages from your background jobs. You can add loggers by using the JobContext.logger() method. You can easily inject JobContext it into your background jobs by adding it as a parameter to your lambda expression.
Example
We do the following:
BackgroundJob.enqueue(() -> myService.doWork(JobContext.Null))

// Actual implementation
public class MyService {
    public void doWork(JobContext jobContext) {
        jobContext.logger().info("Hello from JobRunr!");
    }
}
The explanation is as follows:
Progress bars: You can report the accurate progress of your background jobs using the JobContext.progress() method. The progress bar is visible in the JobRunr dashboard, which will help users understand the status of their jobs. progressBar is available as part of the same JobContext object, so we can use it the same way as the logger.
Example
We do the following:
public class MyService {
  public void doWork(JobContext jobContext) {
    JobDashboardProgressBar progressBar = jobContext.progressBar(1000);
    for(int i = 0; i < 1000; i++) {
      // or use progressBar.setValue(i) to set the value directly
      progressBar.increaseByOne(); 
    }
  }
}
5. Deleting jobs
The explanation is as follows:
Sometimes, you may want to delete the scheduled or recurring job. You can do this by using the BackgroundJob.delete() method. 
Example
We do the following:
JobId jobId = BackgroundJob.<EmailSender>enqueue(x -> x.doWork());
BackgroundJob.delete(jobId);


SpringCache CacheErrorHandler - For Handling Redis Server Inaccessibility Errors


Introduction
We include this line:
import org.springframework.cache.interceptor.CacheErrorHandler;
Açıklaması şöyle
We define a custom error handler to handle any exceptions that occur during Redis command execution. This helps to connect to the actual source of data instead of throwing an exception if the Redis command execution failed.
We need to override the errorHandler() method of our class that implements the CachingConfigurer interface.

Example
We do the following:
@Override
public CacheErrorHandler errorHandler() {
  return new CacheErrorHandler() {
    @Override
    public void handleCacheGetError(RuntimeException exception, Cache cache,
      Object key) {
    }

    @Override
    public void handleCachePutError(RuntimeException exception, Cache cache,
      Object key, Object value) {
    }

    @Override
    public void handleCacheEvictError(RuntimeException exception, Cache cache,
      Object key) {
    }

    @Override
    public void handleCacheClearError(RuntimeException exception, Cache cache) {
    }
  };
}
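What the empty handler bodies buy us is fall-through to the real data source; a framework-free sketch of that behavior (FailSafeCache is an illustrative name, not Spring API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class FailSafeCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private boolean cacheDown = false; // simulates Redis being unreachable

    public void simulateOutage() { cacheDown = true; }

    // When the cache layer throws, swallow the error and hit the real
    // source of data instead - the same effect as the empty handleCacheGetError.
    public V get(K key, Supplier<V> loader) {
        try {
            if (cacheDown) throw new RuntimeException("connection refused");
            return store.computeIfAbsent(key, k -> loader.get());
        } catch (RuntimeException e) {
            return loader.get();
        }
    }
}
```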
Example
We do the following:
import org.springframework.cache.annotation.CachingConfigurer;
import org.springframework.cache.interceptor.CacheErrorHandler;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CachingConfiguration implements CachingConfigurer {

  @Override
  public CacheErrorHandler errorHandler() {
    return new CustomCacheErrorHandler();
  }
}

import org.springframework.cache.Cache;
import org.springframework.cache.interceptor.CacheErrorHandler;

public class CustomCacheErrorHandler implements CacheErrorHandler {

  @Override
  public void handleCacheGetError(RuntimeException exception, Cache cache, Object key) {
  }

  @Override
  public void handleCachePutError(RuntimeException exception, Cache cache, Object key,
    Object value) {
  }

  @Override
  public void handleCacheEvictError(RuntimeException exception, Cache cache, Object key) {
  }

  @Override
  public void handleCacheClearError(RuntimeException exception, Cache cache) {
  }
}



Friday, January 27, 2023

SpringScheduling Unit Testing the @Scheduled Annotation

Example
Let the code be as follows:
@Component
public class MyScheduler {

    private final MyService myService;

    public MyScheduler(MyService myService) {
        this.myService = myService;
    }

    @Scheduled(fixedDelayString = "${app.scheduler.delay-ms}")
    public void run() {
        myService.call();
    }
}
We do the following:
@ExtendWith(SpringExtension.class)
@SpringBootTest( classes = { MyApplication.class, MySchedulerIntegrationTest.MySchedulerTestConfig.class }, properties = { "app.scheduler.delay-ms=200" }) @ActiveProfiles("local") class MySchedulerIntegrationTest { // Dependency injected into scheduler, we mock it to verify is being called. @MockBean private MyService myService; @Test void shouldStartScheduler() { await().atMost(300, TimeUnit.MILLISECONDS) .untilAsserted(() -> { verify(myService, times(2)).call(); }); } @Configuration static class MySchedulerTestConfig { // If you need to mock anything in your context, add here using @MockBean. } }


Sunday, January 22, 2023

SpringTest Testcontainers MariaDBContainer

Gradle
Example
We include these lines:
testImplementation "org.testcontainers:testcontainers:1.17.3"
testImplementation "org.testcontainers:mariadb:1.17.3"
The application-test.yml File
Example
We do the following:
spring:
  datasource:
    driver-class-name: org.testcontainers.jdbc.ContainerDatabaseDriver
    url: jdbc:tc:mariadb:10.5:///?TC_DAEMON=true
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MariaDB103Dialect




Sunday, January 15, 2023

SpringKafka Consumer ConcurrentKafkaListenerContainerFactory.setRecordFilterStrategy Method - The Consumer Filters Messages

Introduction
We include this line:
import org.springframework.kafka.listener.adapter.RecordFilterStrategy;
Messages that do not pass the filter never reach the listener. Note that filter() returning true means the record is discarded.

Example
We do the following:
@Bean(name = "farLocationContainerFactory")
public ConcurrentKafkaListenerContainerFactory<Object, Object> factory(
    ConcurrentKafkaListenerContainerFactoryConfigurer configurer) {
  var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
  configurer.configure(factory, consumerFactory());
  factory.setRecordFilterStrategy(new RecordFilterStrategy<Object, Object>() {
    @Override
    public boolean filter(ConsumerRecord<Object, Object> consumerRecord) {
      try {
        CarLocation location = objectMapper.readValue(
            consumerRecord.value().toString(), CarLocation.class);
        return location.getDistance() <= 100;
      } catch (JsonProcessingException e) {
        return false;
      }
    }
  });
  return factory;
}
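The filtering itself boils down to a predicate over the deserialized payload. A framework-free restatement (CarLocation modeled as a record here); remember that in Spring Kafka, filter() returning true means the record is discarded, so this factory drops locations within 100 and the listener only sees the far ones:

```java
public class LocationFilter {
    // Illustrative payload shape assumed from the example above.
    public record CarLocation(String id, int distance) {}

    // true = discard the record (same contract as RecordFilterStrategy.filter)
    public static boolean filteredOut(CarLocation location) {
        return location.distance() <= 100;
    }
}
```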

SpringKafka Consumer DeadLetterPublishingRecoverer Sınıfı

Introduction
We include this line:
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
We do the following:
@Bean(name = "invoiceDltContainerFactory")
public ConcurrentKafkaListenerContainerFactory<Object, Object> listenerContainerFactory(
  ConcurrentKafkaListenerContainerFactoryConfigurer configurer, 
  KafkaTemplate<String, String> kafkaTemplate) {


  var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
  configurer.configure(factory, consumerFactory());

  
  var recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate, 
    (record, ex) -> new TopicPartition("t-invoice-dead", record.partition()));

  factory.setCommonErrorHandler(new DefaultErrorHandler(recoverer, new FixedBackOff(1000,5)));

  return factory;
}
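The retry flow implied by FixedBackOff(1000, 5) can be sketched as plain control flow (a simplified model, assuming the 5 is the number of retries after the first delivery):

```java
public class RetryFlow {
    // With DefaultErrorHandler + FixedBackOff(1000, 5), a failing record gets
    // 1 initial try plus 5 retries (1 second apart); only when all fail does
    // the DeadLetterPublishingRecoverer publish it to t-invoice-dead.
    public static String process(int failingAttempts) {
        int maxAttempts = 1 + 5; // first delivery + 5 retries
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (attempt > failingAttempts) {
                return "processed on attempt " + attempt;
            }
        }
        return "published to t-invoice-dead";
    }
}
```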


Wednesday, January 11, 2023

SpringSession Redis

Introduction
The explanation is as follows:
How does this work?
1. We inform Spring that sessions will now be cached in Redis.
2. Spring receives a request.
3. Spring Security kicks in and user is authenticated.
4. Spring Session object is serialized and saved in the cache.
5. Client gets a cookie with the Session ID.
6. Client then sends the session id for further requests.
7. Any instance of the UI Service will check in the cache for a session object against the Session ID provided by the client.
8. Session object is de-serialized and reused.
Gradle
We do the following:
dependencies {
  compile 'org.springframework.boot:spring-boot-starter-data-redis'
  compile("org.springframework.boot:spring-boot-starter-cache")
  implementation('org.springframework.session:spring-session-data-redis')
}
1. @EnableRedisHttpSession Anotasyonu
The explanation is as follows:
This annotation when parsed, creates a Spring Bean with the name of springSessionRepositoryFilter that implements Filter. The filter is in charge of replacing the HttpSession implementation to be backed by Spring Session. In this instance, Spring Session is backed by Redis.
Example
We do the following:
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

@EnableRedisHttpSession
@EnableEurekaClient
@SpringBootApplication
public class UIApplication {

  public static void main(String[] args) {
    SpringApplication.run(UIApplication.class, args);
  }
}
2. application.properties
We do the following:
spring.cache.type=redis
spring.redis.host=<ip-address>
spring.redis.port=<Redis port>



Using SpringSession

Introduction
It is used for storing session information somewhere outside of Spring.
The stores that can be used with SpringSession are the following:

How It Works
The explanation is as follows:
How do Centralized Sessions work in Spring?
A web server creates an HTTP Session for Spring to work on and save authentication details and other user/request specific details. The server then sends the Session ID back to the client in a cookie.

This works fine, if you have a single instance of an application. All hell breaks loose when microservices come into the picture. To mitigate this Spring came up with Spring Session.

Spring Session makes it trivial to support clustered sessions without being tied to an application container specific solution.

It replaces the HttpSession in an application container (i.e. Tomcat) in a neutral way, with support for providing session IDs in headers to work with RESTful APIs.

We need to save this session somewhere that is common to every instance. And that common place should be very fast to return back the details of the session.

So, we need a cache. But what kind? Database or In memory?

Both have their pros and cons. Database is cheaper on the storage but is slow. In-memory cache while fast will have to work with a limited amount of RAM.

If your concurrent users aren’t in the millions and you have decent enough servers then In-memory caching is the better solution here.
The explanation is as follows:
But what is really under the hood and what is really happening when we are using session data mongo? In fact, the majority of this magic is being done by the SessionRepositoryFilter. If you track down the HTTP request and see where actually the session object is created you will notice multiple things:

The HttpServletRequest is wrapped by the SessionRepositoryFilter, which also overrides the methods for obtaining a HttpSession. SessionRepositoryFilter will check the validity of the token first and also:

- Will check if any cookie is present and will load the session data from the store
- Will convert the HttpSession into a MongoSession
- will update session data in the store

There are a lot of different dependencies available that you can use based on the store that you are using or are more comfortable with.
In other words, if we are using SpringSession, even when we work with HttpSession or HttpServletRequest, they are actually wrapped by another class.

Example
We do the following:
@Slf4j
@Controller
public class TestController {

  @RequestMapping("mongodb-session")
  public String getSession(HttpSession session){
    if ( session.getAttribute("counter") == null ){
      session.setAttribute("counter" , 1 );
      log.info( "New user");
    } else {
      log.info( "visit count : " + session.getAttribute("counter")  );
      session.setAttribute("counter" , (int) session.getAttribute("counter") + 1 );
    }
    return "mongodb-session.html";
  }
}
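The counter logic is easier to see without the framework; here a plain Map stands in for the HttpSession that Spring Session actually backs with MongoDB or Redis:

```java
import java.util.Map;

public class SessionCounter {
    // Same logic as the controller above: first visit stores 1,
    // every later visit increments the stored counter.
    public static int visit(Map<String, Object> session) {
        Object counter = session.get("counter");
        int next = (counter == null) ? 1 : (int) counter + 1;
        session.put("counter", next);
        return next;
    }
}
```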



Tuesday, January 10, 2023

SpringData JpaRepository.findById - Don't Use This Anti-Pattern

Introduction
It searches by the field marked as @Id.

Problem 1 - Calling findById() in a Loop
If we want to fetch more than one object, findAllById() should be used instead.
Example
We do the following:
@Service
public class UserService {

  @Autowired
  private UserRepository userRepository;

  public List<User> getUsersByIds(List<Long> ids) {
    return userRepository.findAllById(ids);
  }
}
The generated SQL is as follows:
SELECT * FROM User user WHERE user.id IN :ids
Problem 2
The explanation is as follows. In other words, findById() is not lazy; with relationships such as OneToMany and ManyToOne it can end up fetching the objects on the other side as well.
Another issue with findById is that it can lead to the creation of many unnecessary objects. Each time you call findById, Spring Data JPA creates a new entity object, even if the entity is already in the persistence context. This can lead to a significant increase in memory usage and garbage collection overhead.
getReferenceById() should be preferred over findById().
The code of the getReferenceById() method is as follows. It returns a new proxy:
public T getReferenceById(ID id) {
    Assert.notNull(id, "The given id must not be null!");
    return this.em.getReference(this.getDomainClass(), id);
}
The history of the getReferenceById method is as follows. First there was getOne() -> then getById() -> then it became getReferenceById().
Initially, Spring Data JPA offered a getOne method that we should call in order to get an entity Proxy. But we can all agree that getOne is not very intuitive.

So, in the 2.5 version, the getOne method got deprecated in favor of the getById method alternative, that’s just as unintuitive as its previous version.

Neither getOne nor getById is self-explanatory. Without reading the underlying Spring source code or the underlying Javadoc, would you know that these are the Spring Data JPA methods to call when you need to get an entity Proxy?

Therefore, in the 2.7 version, the getById method was also deprecated, and now we have the getReferenceById method instead,...
Example - OneToMany Relationship
Suppose we have the following code. The Department entity is linked to Employee with OneToMany, and Employee is linked back to Department with ManyToOne:
@Entity
@Table(name = "departments")
public class Department {

  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  private Long id;

  private String name;

  @OneToMany(mappedBy = "department", fetch = FetchType.LAZY)
  private List<Employee> employees = new ArrayList<>();

  // getters and setters
}

@Entity
@Table(name = "employees")
public class Employee {

  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  private Long id;

  private String name;

  @ManyToOne(fetch = FetchType.LAZY)
  @JoinColumn(name = "department_id")
  private Department department;

  // getters and setters
}
Let's use it like this:
@Service
public class DepartmentService {

  @Autowired
  private DepartmentRepository departmentRepository;

  public Department getDepartmentById(Long id) {
    return departmentRepository.findById(id)
    .orElseThrow(() -> new EntityNotFoundException("Department not found with id " + id));
  }
}
The generated SQL is as follows:
SELECT * FROM departments WHERE id = ?

SELECT * FROM employees WHERE department_id = ?
To fix it, we do the following. Here getOne() is used, but getReferenceById() is the same thing anyway:
@Service
public class DepartmentService {

  @Autowired
  private DepartmentRepository departmentRepository;

  public Department getDepartmentReferenceById(Long id) {
    return departmentRepository.getOne(id);
  }
}
The generated query is as follows (note that it only runs once the proxy is actually accessed):
SELECT department FROM Department department WHERE department.id = ?
The explanation is as follows:
Note that the actual SQL query to fetch the related Employee objects will only be executed when we access the employees property of the Department object, or call a method on one of its related employees that requires database access.
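The proxy behavior can be sketched with a plain lazy reference (LazyRef is an illustrative class, not JPA API): the loader, standing in for the SELECT, runs on first access rather than at creation time:

```java
import java.util.function.Supplier;

public class LazyRef<T> {
    private final Supplier<T> loader;
    private T value;
    private boolean loaded = false;

    public LazyRef(Supplier<T> loader) { this.loader = loader; }

    public boolean isLoaded() { return loaded; }

    // First access triggers the "query"; later calls reuse the value.
    public T get() {
        if (!loaded) {
            value = loader.get();
            loaded = true;
        }
        return value;
    }
}
```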
Example - ManyToOne Relationship
Suppose we have the following code:
@Entity
@Table(name = "post")
public class Post {
 
  @Id
  private Long id;
 
  private String title;
 
  @NaturalId
  private String slug;
}

@Entity
@Table(name = "post_comment")
public class PostComment {
 
  @Id
  @GeneratedValue
  private Long id;
 
  private String review;
 
  @ManyToOne(fetch = FetchType.LAZY)
  private Post post;
}
Let's use it like this. We are actually inserting a new PostComment, and for that we need the Post it is associated with:
@Transactional(readOnly = true)
public class PostServiceImpl implements PostService {
 
  @Autowired
  private PostRepository postRepository;
 
  @Autowired
  private PostCommentRepository postCommentRepository;
 
  @Transactional
  public PostComment addNewPostComment(String review, Long postId) {
    PostComment comment = new PostComment()
      .setReview(review)
      .setPost(postRepository.findById(postId)
        .orElseThrow(() -> new EntityNotFoundException(
          String.format("Post with id [%d] was not found!", postId))));

    postCommentRepository.save(comment);
    return comment;
  }
}
The generated SQL is as follows:
SELECT
  post0_.id AS id1_0_0_,
  post0_.slug AS slug2_0_0_,
  post0_.title AS title3_0_0_
FROM
  post post0_
WHERE
  post0_.id = 1
 
SELECT nextval ('hibernate_sequence')
 
INSERT INTO post_comment (
  post_id,
  review,
  id
)
VALUES (
  1,
  'Best book on JPA and Hibernate!',
  1
)
Let's change the code as follows. Now getReferenceById() is used instead:
@Transactional
public PostComment addNewPostComment(String review, Long postId) {
  PostComment comment = new PostComment()
    .setReview(review)
    .setPost(postRepository.getReferenceById(postId));
 
  postCommentRepository.save(comment);
 
  return comment;
}
The generated SQL is as follows:
SELECT nextval ('hibernate_sequence')
 
INSERT INTO post_comment (
    post_id,
    review,
    id
)
VALUES (
    1,
    'Best book on JPA and Hibernate!',
    1
)




Monday, January 9, 2023

SpringBoot spring.jpa Hibernate-Specific Settings - Batch Settings

Introduction
Bulk and batch are different things. The explanation is as follows:
BULK INSERTS: Is the process of inserting a huge number of rows in a database table at once (one or many transactions).
Batching: allows us to send a group of SQL statements to the database in a single transaction; it aims to optimize network and memory usage; so instead of sending each statement by itself we send a group of statements.
1. In Short
The following are used:
1. hibernate.generate_statistics
2. hibernate.order_inserts : Hibernate reorders the INSERT statements
3. hibernate.order_updates : Hibernate reorders the UPDATE statements
4. hibernate.flushMode

5. jdbc.batch_size : sends up to the specified number of statements to the database in a single batch
6. jdbc.fetch_size
7. rewriteBatchedStatements=true via the JDBC URL

Example
We do the following:
spring:
  jpa:
    properties:
      hibernate:
        order_inserts: true
        order_updates: true
        jdbc:
          batch_size: 100
          batch_versioned_data: true
Example
We do the following:
spring.datasource.hikari.data-source-properties.rewriteBatchedStatements=true
or we do the following:
spring.datasource.url=jdbc:mysql://localhost:32803/db?rewriteBatchedStatements=true

2. Detailed Explanations

The spring.jpa.properties.hibernate.generate_statistics Field
After all these configuration changes, we need to measure the effect.

Example
The output looks like this. Here 1 JDBC connection is acquired and 1233 statements are executed:
StatisticalLoggingSessionEventListener : Session Metrics {
    78588493 nanoseconds spent acquiring 1 JDBC connections;
    0 nanoseconds spent releasing 0 JDBC connections;
    208607581 nanoseconds spent preparing 1233 JDBC statements;
    6474843328 nanoseconds spent executing 1233 JDBC statements;
    0 nanoseconds spent executing 0 JDBC batches;
    0 nanoseconds spent performing 0 L2C puts;
    0 nanoseconds spent performing 0 L2C hits;
    0 nanoseconds spent performing 0 L2C misses;
    6966643471 nanoseconds spent executing 2 flushes (flushing a total of 2466 entities and 1000 collections);
    0 nanoseconds spent executing 0 partial-flushes (flushing a total of 0 entities and 0 collections)
}

Example
To measure, we do the following:
spring:
  jpa:
    properties:
      hibernate:
        generate_statistics: true
The output before is as follows:
Session Metrics {
    1272500 nanoseconds spent acquiring 1 JDBC connections;
    0 nanoseconds spent releasing 0 JDBC connections;
    92831400 nanoseconds spent preparing 49207 JDBC statements;
    18557329900 nanoseconds spent executing 49207 JDBC statements;
    0 nanoseconds spent executing 0 JDBC batches;
    0 nanoseconds spent performing 0 L2C puts;
    0 nanoseconds spent performing 0 L2C hits;
    0 nanoseconds spent performing 0 L2C misses;
    21229826900 nanoseconds spent executing 1 flushes (flushing a total of 49043 entities and 223 collections);
    0 nanoseconds spent executing 0 partial-flushes (flushing a total of 0 entities and 0 collections)
}
The output after is as follows. You can see that JDBC batches are now being used:
Session Metrics {
    872300 nanoseconds spent acquiring 1 JDBC connections;
    0 nanoseconds spent releasing 0 JDBC connections;
    6031200 nanoseconds spent preparing 168 JDBC statements;
    103321900 nanoseconds spent executing 165 JDBC statements;
    734107200 nanoseconds spent executing 14 JDBC batches;
    0 nanoseconds spent performing 0 L2C puts;
    0 nanoseconds spent performing 0 L2C hits;
    0 nanoseconds spent performing 0 L2C misses;
    1737581300 nanoseconds spent executing 1 flushes (flushing a total of 49043 entities and 223 collections);
    0 nanoseconds spent executing 0 partial-flushes (flushing a total of 0 entities and 0 collections)
}
hibernate.flushMode
The explanation is as follows. I am not entirely sure what it does in practice:
... flushing is the synchronization of the state of your database with state of your session
... the flushing time is the time spent synchronizing the state of entities in memory with the state of this entities in the database;
Example
We do the following:
spring:
  jpa:
    properties:
      org:
        hibernate:
          flushMode: COMMIT
JDBC Connection Parameters

The reWriteBatchedInserts Parameter
The general explanation is as follows. In short, it rewrites many separate INSERT statements into a single INSERT with multiple VALUES tuples:
... there are bulk (multi-row) insert query options available in most of the mainstream database solutions (Postgres, MySQL, Oracle). With syntax like:
insert into myschema.my_table (col1, col2, col3) 
values
(val11, val12, val13),
(val21, val22, val23),
....
(valn1, valn2, valn3);
While, Postgres and MySQL do support this features with the help of JDBC flag: reWriteBatchedInserts=true
But unfortunately, according to this resource, ms-sql JDBC driver does not support the multi-row rewrite of the queries.
The explanation is as follows:
Asking PostgreSQL to Rewrite batched inserts
Hibernate will send a multiple insert statements to RDBMS at once, in order to insert data, and this will be done in the same transaction which is great; however if we are using PostgreSQL we could go a little further in our optimizations by asking him to rewrite those inserts to a single multi-value insert statement; this way if we have a 100 insert statements, it will rewrites them to a single multi-value statement.
So it becomes like this:
// before Rewrite batched inserts
INSERT INTO container( ...) VALUES (...);
INSERT INTO container( ...) VALUES (...);
....
INSERT INTO container( ...) VALUES (...);
// After PostgreSQL rewrites batched inserts
INSERT INTO container( ...) VALUES
(...), (...) ..., (...);
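At the string level, the rewrite amounts to joining the VALUES tuples into one statement; a toy sketch of the shape (not what the driver literally does internally):

```java
import java.util.List;

public class InsertRewriter {
    // N single-row INSERTs collapse into one INSERT carrying N VALUES tuples,
    // which is the effect reWriteBatchedInserts has on a JDBC batch.
    public static String rewrite(String table, List<String> valueTuples) {
        return "INSERT INTO " + table + " VALUES " + String.join(", ", valueTuples) + ";";
    }
}
```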
When using PostgreSQL, we need to set reWriteBatchedInserts=true (the MySQL equivalent is rewriteBatchedStatements=true, shown earlier). We do the following:
jdbc:postgresql://localhost:5432/mastership?reWriteBatchedInserts=true
If We Use the MysqlDataSource Class as a Bean
See the com.mysql.cj.jdbc.MysqlDataSource article

If We Use a MySQL JDBC URL
See the JDBC MySQL Connection String article

The hibernate.jdbc.batch_size Field
The explanation is as follows. It lets Hibernate send up to the specified number of statements to the database in a single JDBC batch:
Hibernate uses a batch size where it stores statements, before running them in transactions, by increasing it, we will allow hibernate to increase the number of statements that it will send to the database in a single transaction.

Running more statements in a single transaction will result in less transactions and less time, it will also optimize the usage of the network.
- It can be set to 30, or to 4096; you have to experiment to see which works better
- The Bulk Insert improvements article is here
This way, a repository.saveAll() call runs in batches.
Example
We do the following:
spring:
  jpa:
    properties:
      hibernate:
        jdbc:
          batch_size: 4096
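The arithmetic behind the setting: with batch_size B, N buffered statements flush in ceil(N / B) JDBC round trips. For example, the 1233 statements from the earlier session metrics would need 13 batches at batch_size 100:

```java
public class BatchMath {
    // With hibernate.jdbc.batch_size = batchSize, `statements` buffered
    // statements are sent in ceil(statements / batchSize) JDBC batches.
    public static int batches(int statements, int batchSize) {
        return (statements + batchSize - 1) / batchSize;
    }
}
```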
The hibernate.jdbc.fetch_size Field
This is the SELECT-side counterpart: it sets the JDBC fetch size, i.e. how many rows the driver retrieves from the database per round trip.
Example
We do the following:
spring:
  jpa:
    properties:
      hibernate:
        jdbc:
          fetch_size: 4096

The jdbc.batch_size Field and @GeneratedValue
1. Do not use @GeneratedValue(strategy = GenerationType.IDENTITY); with it, Hibernate cannot batch the inserts. The preferred strategy is SEQUENCE.
2. On the PostgreSQL side, the primary key column should not be SERIAL or BIGSERIAL either, because then PostgreSQL will want to generate the values itself. The explanation is as follows:
... hibernate cannot “batch” operations for neither entities having IDs generated by GenerationType.IDENTITY strategy, 
nor entities having SERIAL/BIGSERIAL IDs (in the case of PostgreSQL database).
It could look like this:
public class Book {
  @Id
  @GeneratedValue(strategy = SEQUENCE, generator = "seqGen")
  @SequenceGenerator(name = "seqGen", sequenceName = "seq", initialValue = 1)
  private Long id;
  ...
}
Or like this:
@Entity
@Table(name = "container")
public class Container implements Serializable {
  @Id
  @GeneratedValue(
    strategy = GenerationType.SEQUENCE,
    generator = "container_sequence"
  )
  @SequenceGenerator(
    name = "container_sequence",
    sequenceName = "container_sequence",
    allocationSize = 300
  )
  @Column(name = "id")
  private Long id;
  ...
  @OneToMany(mappedBy = "container", fetch = FetchType.EAGER)
  @Cascade(CascadeType.ALL)
  private List<Pallet> pallets = new ArrayList<>();
  ...
}
Example
We do the following:
hibernate.jdbc.batch_size=50   // group statements into batches of 50
hibernate.order_inserts=true   // order statements to regroup inserts
hibernate.order_updates=true   // order statements to regroup updates
Example
We do the following. Here Hibernate performs 10 inserts in a single JDBC batch:
spring.jpa.properties.hibernate.jdbc.batch_size=10
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true

//Java
for (long i = 1; i <= 10; i++) {
  entityManager.persist(
    new Post()
      .setId(i)
      .setTitle(String.format("Post no. %d", i))
  );
}

//Hibernate output
Type:Prepared, Batch:True, QuerySize:1, BatchSize:10,
Query:["
    insert into post (title, id) values (?, ?)
"],
Params:[
    (Post no. 1, 1), (Post no. 2, 2), (Post no. 3, 3),
    (Post no. 4, 4), (Post no. 5, 5), (Post no. 6, 6),
    (Post no. 7, 7), (Post no. 8, 8), (Post no. 9, 9),
    (Post no. 10, 10)
]
The hibernate.order_inserts Field
The explanation is as follows. In other words, Hibernate first persists the objects that appear as member fields of other entities, so the dependent entity gets the right id earlier:
For hibernate to be more efficient in batching (do more batching) especially in a concurrent environment, we will enable order_updates and order_inserts

Ordering by entity will allow hibernate to save entities that are fields in other entities first, and ordering ids will allow it to use the right sequence values in the statements.
Example
We do the following:
spring:
  jpa:
    properties:
      hibernate:
        order_updates: true
        order_inserts: true
The hibernate.order_updates Field
Similar to hibernate.order_inserts: Hibernate first persists the objects that appear as member fields of other entities, so the dependent entity gets the right id earlier.
Example
We do the following:
## Hibernate properties
spring.jpa.hibernate.ddl-auto=none
spring.jpa.show-sql=false
spring.jpa.open-in-view=false
spring.jpa.properties.hibernate.jdbc.time_zone=UTC
spring.jpa.properties.hibernate.jdbc.batch_size=15
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true
spring.jpa.properties.hibernate.connection.provider_disables_autocommit=true
spring.jpa.properties.hibernate.query.in_clause_parameter_padding=true
spring.jpa.properties.hibernate.query.fail_on_pagination_over_collection_fetch=true
spring.jpa.properties.hibernate.query.plan_cache_max_size=4096
 
logging.level.net.ttddyy.dsproxy.listener=debug
The explanation is as follows:
The spring.jpa.hibernate.ddl-auto setting is set to none to disable the hbm2ddl schema generation tool since we are using Flyway to manage the database schema automatically.

The spring.jpa.show-sql is set to false to avoid Hibernate printing the SQL statements to the console. As I explained in this article, it’s better to use datasource-proxy for this task. And that’s why we set the logging.level.net.ttddyy.dsproxy.listener property to debug in development mode. Of course, in the production profile, this property is set to info.

The spring.jpa.open-in-view property is set because we want to disable the dreadful Open-Session in View (OSIV) that’s enabled by default in Spring Boot. The OSIV anti-pattern can cause serious performance and scaling issues, so it’s better to disable it right from the very beginning of your project development.

The spring.jpa.properties.hibernate.jdbc.time_zone property sets the default timezone to UTC to make it easier to handle timestamps across multiple timezones. For more details about handling timezones with Spring Boot, check out this article.

To enable automatic JDBC batching, we are setting the three properties:

The first property sets the default batch size to 15 so that up to 15 sets of bind parameter values could be grouped and sent in a single database roundtrip. The next two settings are meant to increase the likelihood of batching when using cascading. Check out this article for more details about this topic.

The spring.jpa.properties.hibernate.connection.provider_disables_autocommit property is the one that instructs Hibernate that the connection pool disables the auto-commit flag when opening database connections. Check out this article for more details about this performance tuning setting.

The spring.jpa.properties.hibernate.query.in_clause_parameter_padding setting increases the likelihood of statement caching for IN queries as it reduces the number of possible SQL statements that could get generated while varying the IN clause parameter list. Check out this article for more details about this optimization.

The spring.jpa.properties.hibernate.query.fail_on_pagination_over_collection_fetch property is set because we want Hibernate to throw an exception in case a pagination query uses a JOIN FETCH directive. Check out this article for more details about this safety option.

The spring.jpa.properties.hibernate.query.plan_cache_max_size property is set to increase the size of the Hibernate query plan cache. By using a larger cache size, we can reduce the number of JPQL and Criteria API query compilations, therefore increasing application performance. Check out this article for more details about this performance-tuning option.
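The padding mentioned for in_clause_parameter_padding is just rounding the IN-list bind-parameter count up to the next power of two, so far fewer distinct SQL strings compete for the plan cache; a small sketch of that math:

```java
public class InClausePadding {
    // An IN list with `params` bind parameters is padded (by repeating the
    // last value) up to the next power of two, e.g. 6 parameters become 8.
    public static int paddedParamCount(int params) {
        int padded = 1;
        while (padded < params) padded *= 2;
        return padded;
    }
}
```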