Multitenancy


Definition

A multi-tenant application is software that can serve multiple customers (tenants) on a single instance at the same time.

Single-tenancy

Multi-tenancy

Requirements

The application should meet the following criteria to qualify as a multi-tenant product:

Customization

  • Extension of the data model: this enables customers to add additional fields to the data model based on their own business needs. The application requires certain mandatory fields in order to operate; beyond those, each business can add its own fields. Storing these extra fields as JSON is a good way to extend the amount of information a record can carry.
  • Workflow: the system can either ship a set of pre-defined workflows or provide a dynamic mechanism that lets customers define their own business workflows. In our case, we use the dynamic mechanism (BPMN with Inubit).
  • Access control: each customer should have a mechanism to manage its own users and permissions.
  • Branding: allow each customer to customize the look and feel to match its corporate branding. This is usually not a big deal; how far it goes depends on how much of the application the customer is allowed to customize.
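
The JSON-based data-model extension above can be sketched in plain Java (the class and field names are illustrative, not from any real product; a real implementation would persist the map as a JSON column):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical record: a fixed set of mandatory fields the application needs,
// plus a free-form extension map that each tenant fills as it likes.
class ExtensibleRecord {
    // Mandatory fields the application itself relies on
    final long id;
    final String tenantId;

    // Tenant-defined extra fields, stored as JSON in the database
    private final Map<String, String> extraFields = new LinkedHashMap<>();

    ExtensibleRecord(long id, String tenantId) {
        this.id = id;
        this.tenantId = tenantId;
    }

    void putExtra(String key, String value) {
        extraFields.put(key, value);
    }

    // Naive JSON rendering, just enough to show the storage format
    String extraFieldsAsJson() {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : extraFields.entrySet()) {
            if (!first) sb.append(",");
            sb.append("\"").append(e.getKey()).append("\":\"").append(e.getValue()).append("\"");
            first = false;
        }
        return sb.append("}").toString();
    }
}
```

The mandatory fields stay strongly typed, while anything tenant-specific goes into the JSON blob, so no schema migration is needed per tenant.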

 

Isolation

  • Tenant: each tenant has its own domain, which other tenants cannot access.
  • Data: each tenant manages its own data in an isolated manner. With an RDBMS, we can use:
    • Separate database: each tenant has its own database for its data. In this case, the application easily manages a separate data-source object (connections, transactions, ORM) per tenant.
    • Separate schema: tenants share a database, but each tenant occupies its own dedicated schema(s).
    • Shared schema: tenants share the same set of tables. Each table then needs a discriminator column indicating which record belongs to which tenant.
  • Execution: the business workflows of a given tenant cannot be triggered or inhibited by another tenant.
  • Performance: no tenant impacts the performance of another tenant.
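
The shared-schema variant can be sketched in plain Java (an in-memory stand-in for a shared table; all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Simulates a shared table: every row carries a tenant discriminator,
// and every query must filter on it so tenants stay isolated.
class SharedSchemaTable {
    static class Row {
        final String tenantId; // the discriminator column
        final String payload;
        Row(String tenantId, String payload) {
            this.tenantId = tenantId;
            this.payload = payload;
        }
    }

    private final List<Row> rows = new ArrayList<>();

    void insert(String tenantId, String payload) {
        rows.add(new Row(tenantId, payload));
    }

    // Equivalent of: SELECT payload FROM t WHERE tenant_id = ?
    List<String> selectByTenant(String tenantId) {
        List<String> result = new ArrayList<>();
        for (Row r : rows) {
            if (r.tenantId.equals(tenantId)) result.add(r.payload);
        }
        return result;
    }
}
```

The essential point is that the discriminator filter is applied on every access path; forgetting it anywhere is a data leak between tenants.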
Categories: Others

When should you consider using ESB


In today’s world, application development is more about application reuse and integration than green-field development. Given the volume of IT assets an organization owns, and how many of them have been exposed as services or APIs over the past decade or so thanks to SOA and Web Services initiatives, integration is a very common case.

Therefore, whenever you scope a project, you need to think about the integration and reuse of services and APIs with the ESB pattern. The realities of modern-day application development are that:

  • You are integrating three or more services
  • You will have to leave provision for incremental plugging in of applications in the future
  • You will have to support more than one message format or media type
  • You will have to connect and consume services using multiple communication protocols
  • You will need to deal with in-flight message modifications and pick destinations to route messages based on content
  • You will need to expose your application as services or APIs to be consumed by other applications

Hence, you need an ESB, because an ESB comes packed with features that cater to these requirements. An ESB also provides a solution development model that helps your development team adhere to ESB best practices when realizing the solution.
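
The content-based routing point above can be illustrated with a minimal mediation sketch (plain Java, not a real ESB; the destination names and message types are made up for the example):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal content-based router: inspects a message attribute and picks the
// destination, which is the core mediation feature an ESB ships out of the box.
class ContentBasedRouter {
    private final Map<String, String> routes = new HashMap<>();
    private final String defaultDestination;

    ContentBasedRouter(String defaultDestination) {
        this.defaultDestination = defaultDestination;
    }

    void addRoute(String messageType, String destination) {
        routes.put(messageType, destination);
    }

    // Route on a "type" attribute extracted from the message content;
    // unknown types fall through to the default (dead-letter) destination.
    String route(String messageType) {
        return routes.getOrDefault(messageType, defaultDestination);
    }
}
```

In a real ESB the same decision is usually expressed declaratively (e.g. an XPath or header condition in a mediation flow) rather than in code.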

Categories: Java

Define an output package for Spring Boot application with Assembly plugin


 

Add the Maven Assembly plugin


<build>
    <finalName>${project.artifactId}</finalName>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <executions>
                <execution>
                    <goals>
                        <goal>repackage</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <appendAssemblyId>true</appendAssemblyId>
                <descriptors>
                    <descriptor>${project.basedir}/src/assembly/assembly.xml</descriptor>
                </descriptors>
            </configuration>
            <executions>
                <execution>
                    <id>create-archive</id>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

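
The build above points at a descriptor in ${project.basedir}/src/assembly/assembly.xml, which this excerpt does not show. A minimal descriptor (contents assumed purely for illustration) could look like:

```xml
<assembly xmlns="http://maven.apache.org/ASSEMBLY/2.0.0">
    <id>dist</id>
    <formats>
        <format>zip</format>
    </formats>
    <fileSets>
        <fileSet>
            <directory>${project.build.directory}</directory>
            <outputDirectory>/</outputDirectory>
            <includes>
                <include>*.jar</include>
            </includes>
        </fileSet>
    </fileSets>
</assembly>
```

Running `mvn package` then produces the repackaged Spring Boot jar plus a zip archive assembled according to this descriptor.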

Categories: Java

Notions in Apache Kafka

April 23, 2018

Producer

Producers create new messages. In other publish/subscribe systems, they may be called publishers or writers.

A producer is concerned with the following things:

  • Generating the message (what to send?)
  • Serializing the message into Kafka’s wire format (what is the format of the data?)
  • Sending it to a topic and partition (where is the message sent?)
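
The three producer concerns can be sketched without the Kafka client library (a hedged illustration: the partitioner below only mimics the spirit of Kafka’s default hash-by-key behaviour, which actually uses murmur2, and the message format is made up):

```java
import java.nio.charset.StandardCharsets;

// Sketch of the three producer-side steps: build a message, serialize it
// to bytes, and choose the partition it goes to.
class ProducerSketch {
    // 1. Generate: decide what to send
    static String generate(String orderId) {
        return "order-created:" + orderId;
    }

    // 2. Serialize: Kafka transports plain bytes
    static byte[] serialize(String message) {
        return message.getBytes(StandardCharsets.UTF_8);
    }

    // 3. Partition: hash of the key modulo partition count, so the same
    // key always lands on the same partition (preserving per-key ordering)
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }
}
```

With the real client these steps map to building a ProducerRecord, configuring key/value serializers, and the partitioner, respectively.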

Consumer

Consumers read messages. In other publish/subscribe systems, these clients may be called subscribers or readers.

Kafka doesn’t track the acknowledgment from consumers the way many JMS queues do.

To consume messages from Kafka, we need to specify the following parameters:

  • Topic name
  • At least one broker to bootstrap from

Consumer Group

Each topic partition is consumed by only one consumer within a given group.

The number of consumers in a group cannot usefully exceed the number of partitions; extra consumers sit idle.
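
Both rules can be made concrete with a round-robin assignment sketch (plain Java, not Kafka’s actual assignor; it assumes a non-empty consumer list):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Assigns each partition to exactly one consumer of the group, round-robin.
// With more consumers than partitions, the surplus consumers get nothing.
class PartitionAssignment {
    static Map<String, List<Integer>> assign(List<String> consumers, int numPartitions) {
        Map<String, List<Integer>> assignment = new HashMap<>();
        for (String c : consumers) assignment.put(c, new ArrayList<>());
        for (int p = 0; p < numPartitions; p++) {
            // each partition goes to exactly one consumer
            assignment.get(consumers.get(p % consumers.size())).add(p);
        }
        return assignment;
    }
}
```

With 3 consumers and 2 partitions, one consumer ends up with an empty assignment, which is why adding consumers beyond the partition count gains nothing.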

Partition Rebalance

Poll Loop

The poll loop handles the following actions:

  • Coordination
  • Partition rebalance
  • Heartbeat
  • Data fetching

Commit

The action of updating the current position in the partition is called a commit.
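
A toy offset tracker shows what committing buys you (this models only the bookkeeping; a real consumer commits offsets to Kafka itself):

```java
import java.util.HashMap;
import java.util.Map;

// Tracks the consumer's position per partition; "commit" publishes the
// current position so a restarted consumer can resume from it instead of
// re-reading (or skipping) messages.
class OffsetTracker {
    private final Map<Integer, Long> position = new HashMap<>();  // where we are reading
    private final Map<Integer, Long> committed = new HashMap<>(); // last committed position

    void recordConsumed(int partition, long offset) {
        position.put(partition, offset + 1); // next offset to read
    }

    void commit() {
        committed.putAll(position);
    }

    // After a restart, consumption resumes from the committed offset
    long resumeFrom(int partition) {
        return committed.getOrDefault(partition, 0L);
    }
}
```

Messages consumed after the last commit are re-delivered after a crash, which is the at-least-once behaviour Kafka consumers exhibit by default.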

 

Categories: Java

Differences between HTTP 1.0 and HTTP 1.1

April 18, 2018

Host field

HTTP 1.1 requires the Host header field by spec.

HTTP 1.0 doesn’t officially require the Host header.

GET / HTTP/1.1
Host: www.blahblahblahblah.com

Persistent connection

HTTP 1.0 closes the connection after each request unless the Connection: keep-alive header is included.

HTTP 1.1 considers all connections persistent by default, unless Connection: close is sent.
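
On the wire, the defaults are mirror images of each other (host and path are placeholders):

```
GET / HTTP/1.0
Connection: keep-alive     <- needed to KEEP a 1.0 connection open

GET / HTTP/1.1
Host: www.example.com
Connection: close          <- needed to CLOSE a 1.1 connection
```

Persistent connections let a client reuse one TCP connection for many requests, avoiding a handshake per request.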

Method OPTIONS

HTTP 1.1 introduces a new method named OPTIONS.

HTTP 1.0 doesn’t support this method
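
A typical exchange (the path and the Allow list are illustrative): the client asks which methods a resource supports, and the server answers in the Allow header:

```
OPTIONS /resource HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Allow: GET, HEAD, POST, OPTIONS
```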

Response code 100

HTTP 1.1 uses response code 100 (Continue) so a client can send its headers first and only transmit a large body once the server agrees, avoiding wasted big request payloads.

HTTP 1.0 doesn’t support this response code
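
The mechanism relies on the Expect header (path and size are illustrative): the client announces a large body, waits for the interim 100 response, and only then uploads:

```
PUT /upload HTTP/1.1
Host: www.example.com
Content-Length: 10485760
Expect: 100-continue

HTTP/1.1 100 Continue

(client now sends the 10 MB body)
```

If the server rejects the request (e.g. with 401 or 413) instead of sending 100, the client never transmits the payload.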

Categories: Others

Spring Data JPA listen to Postgres notification

April 11, 2018

Create trigger function

-- FUNCTION: public.table_update_notify()

-- DROP FUNCTION public.table_update_notify();

CREATE FUNCTION public.table_update_notify()
    RETURNS trigger
    LANGUAGE 'plpgsql'
    COST 100
    VOLATILE NOT LEAKPROOF
AS $BODY$
DECLARE
  id bigint;
BEGIN
  IF TG_OP = 'INSERT' OR TG_OP = 'UPDATE' THEN
    id = NEW.id;
  ELSE
    id = OLD.id;
  END IF;
  PERFORM pg_notify('employee_channel', json_build_object('table', TG_TABLE_NAME, 'id', id, 'type', TG_OP)::text);
  IF TG_OP = 'DELETE' THEN
    RETURN OLD; -- RETURN NEW would be NULL for a DELETE and would cancel it
  END IF;
  RETURN NEW;
END;

$BODY$;

ALTER FUNCTION public.table_update_notify()
    OWNER TO postgres;

Create a trigger on the table named employee

-- Trigger: employee_notify_update

-- DROP TRIGGER employee_notify_update ON production.employee;

CREATE TRIGGER employee_notify_update
    BEFORE INSERT OR DELETE OR UPDATE
    ON production.employee
    FOR EACH ROW
    EXECUTE PROCEDURE public.table_update_notify();

Now, every time the employee table changes, it fires an event whose payload describes the change.
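
On the Java side, a listener (e.g. via the pgjdbc PGConnection notification API) receives the payload built by pg_notify above as a plain string. A minimal hand-rolled field extraction could look like this (a real application would use a JSON library such as Jackson; this parser only handles the flat payload shown here):

```java
// Extracts one field from the flat JSON payload sent by table_update_notify(),
// e.g. {"table" : "employee", "id" : 5, "type" : "UPDATE"}.
class NotifyPayload {
    static String field(String json, String name) {
        // find the quoted key, then take the value after the colon,
        // up to the next ',' or '}', stripping quotes and whitespace
        int k = json.indexOf("\"" + name + "\"");
        if (k < 0) return null;
        int colon = json.indexOf(':', k);
        int end = colon + 1;
        while (end < json.length() && json.charAt(end) != ',' && json.charAt(end) != '}') end++;
        return json.substring(colon + 1, end).trim().replace("\"", "");
    }
}
```

Given the payload above, `field(payload, "type")` yields "UPDATE", which the listener can use to decide whether to refresh or evict the corresponding entity.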

Initialize the Spring Data JPA with spring boot starter

Categories: Others Tags: , ,

Spring Data JPA with multiple datasource


Prerequisites

  • Spring Boot 1.5.11.RELEASE
  • Spring Data JPA 1.11.11.RELEASE
  • Hibernate 5.3.5.Final
  • Postgres 9.4

When using Spring Data JPA together with Spring Boot, it’s easy to see that Spring Boot JDBC initializes one primary data source.

In reality, though, an application usually needs more than one data source. This is especially true with legacy systems, which often centralize all the business-logic implementations from a customer. Consider the following scenario:

spring-data-jpa-multiple-datasource

The Management System connects to both the Office department database and the People department database.

The following steps are sufficient for you to connect to one additional data source:

Exclude the default initialized data source from Spring Boot JDBC

@SpringBootApplication(exclude = { DataSourceAutoConfiguration.class })
public class Application {

    public static void main(String[] args) {
        SpringApplication
            .run(Application.class, args);
    }
}

Configure data source Office

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(entityManagerFactoryRef = "officeEntityManagerFactory", transactionManagerRef = "officeTransactionManager", basePackageClasses = {
    DeviceRepository.class })
public class OfficeDataSourceConfiguration {

    @Bean(name = "officeEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(
        final EntityManagerFactoryBuilder builder,
        @Qualifier("officeDataSource") final DataSource dataSource) {
        return builder.dataSource(dataSource).packages(Device.class)
            .persistenceUnit("office").build();
    }

    @Bean(name = "officeDataSource")
    @ConfigurationProperties(prefix = "datasource.office")
    public DataSource dataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "officeTransactionManager")
    public PlatformTransactionManager transactionManager(
        @Qualifier("officeEntityManagerFactory") final EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }
}

Configure data source People

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(entityManagerFactoryRef = "peopleEntityManagerFactory", transactionManagerRef = "peopleTransactionManager", basePackageClasses = {
    DepartmentRepository.class })
public class PeopleDataSourceConfiguration {

    @Primary
    @Bean(name = "peopleEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(
        final EntityManagerFactoryBuilder builder,
        @Qualifier("peopleDataSource") final DataSource dataSource) {
        return builder.dataSource(dataSource).packages(Department.class)
            .persistenceUnit("people").build();
    }

    @Primary
    @Bean(name = "peopleDataSource")
    @ConfigurationProperties(prefix = "datasource.people")
    public DataSource dataSource() {
        return DataSourceBuilder.create().build();
    }

    @Primary
    @Bean(name = "peopleTransactionManager")
    public PlatformTransactionManager peopleTransactionManager(
        @Qualifier("peopleEntityManagerFactory") final EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }
}

Update configuration in application.yml

spring:
    application:
        name: spring-data-jpa-multiple-datasource
    profiles:
        active: default
    data.jpa.repositories.enabled: true
    jpa:
        database-platform: POSTGRESQL
        generate-ddl: true
        open-in-view: true
        show-sql: false
        hibernate.ddl-auto: update
        properties:
            hibernate:
                dialect: org.hibernate.dialect.PostgreSQL94Dialect
                default_schema: production

datasource:
    people:
        driverClassName: org.postgresql.Driver
        url: jdbc:postgresql://10.10.15.171:5432/people?currentSchema=production
        username: postgres
        password: postgres
        type: org.apache.tomcat.jdbc.pool.DataSource
        jmx-enabled: true
        initialSize: 2
        maxActive: 100
        maxIdle: 5
        minIdle: 2
        maxWait: 600000
        testOnBorrow: true
        validationQuery: select 1
        minEvictableIdleTimeMillis: 60000
        removeAbandoned: true
        removeAbandonedTimeout: 60000
        testWhileIdle: true
        timeBetweenEvictionRunsMillis: 60000

    office:
        driverClassName: org.postgresql.Driver
        url: jdbc:postgresql://10.10.15.171:5432/office?currentSchema=production
        username: postgres
        password: postgres
        type: org.apache.tomcat.jdbc.pool.DataSource
        jmx-enabled: true
        initialSize: 2
        maxActive: 100
        maxIdle: 5
        minIdle: 2
        maxWait: 600000
        testOnBorrow: true
        validationQuery: select 1
        minEvictableIdleTimeMillis: 60000
        removeAbandoned: true
        removeAbandonedTimeout: 60000
        testWhileIdle: true
        timeBetweenEvictionRunsMillis: 60000

Source code sample
