
chore: manage version of maven plugin in order to remove build warning #146

Open
wants to merge 39 commits into master

Conversation

leeyazhou

Motivation:

When executing `mvn clean install`, the following warnings show up:

[WARNING] 
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-client-api:jar:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 57, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 43, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 50, column 21
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-client-log:jar:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 62, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 48, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 55, column 21
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-client-impl:jar:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 110, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 96, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 103, column 21
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-client-parent:pom:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 105, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 91, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 98, column 21
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-server-session:jar:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.springframework.boot:spring-boot-maven-plugin is missing. @ line 91, column 21
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-server-data:jar:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.springframework.boot:spring-boot-maven-plugin is missing. @ line 95, column 21
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-server-meta:jar:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.springframework.boot:spring-boot-maven-plugin is missing. @ line 102, column 21
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-server-integration:jar:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.springframework.boot:spring-boot-maven-plugin is missing. @ line 45, column 21
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-distribution-session:pom:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 43, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 29, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 36, column 21
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-distribution-data:pom:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 43, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 29, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 36, column 21
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-distribution-meta:pom:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 43, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 29, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 36, column 21
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-distribution-integration:pom:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 42, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 28, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 35, column 21
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-distribution:pom:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 47, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 33, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 40, column 21
[WARNING] 
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING] 
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING] 

Modification:

Add a `pluginManagement` section to the parent registry-parent/pom.xml:

    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.7.0</version>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-install-plugin</artifactId>
                <version>3.0.0-M1</version>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-deploy-plugin</artifactId>
                <version>3.0.0-M1</version>
            </plugin>
            <plugin>
                <groupId>org.sonatype.plugins</groupId>
                <artifactId>nexus-staging-maven-plugin</artifactId>
                <version>1.6.7</version>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <version>${springboot.version}</version>
            </plugin>
        </plugins>
    </pluginManagement>
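As a quick spot-check (a sketch, not part of the change): after rebuilding, the build output should no longer contain the `'build.plugins.plugin.version' ... is missing` warning. The snippet below greps a sample log line so it is self-contained; in practice it would scan the captured output of `mvn clean install`:

```shell
# Sketch: scan build output for the unpinned-plugin-version warning.
# "sample" stands in for a captured `mvn clean install` log so the
# check is self-contained; point grep at the real log file in practice.
sample="[INFO] BUILD SUCCESS"
if printf '%s\n' "$sample" | grep -q "build\.plugins\.plugin\.version.*is missing"; then
    echo "unpinned plugin versions found"
else
    echo "no unpinned plugin versions"
fi
```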

Result:

The warnings no longer appear.
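To keep such warnings from creeping back in, the build could additionally enforce pinned plugin versions with the Maven Enforcer Plugin's `requirePluginVersions` rule, which fails the build whenever a plugin version is left unpinned. A sketch only (the enforcer version shown is an assumption, and this plugin is not part of this change):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-enforcer-plugin</artifactId>
    <version>3.0.0-M2</version>
    <executions>
        <execution>
            <id>enforce-plugin-versions</id>
            <goals>
                <goal>enforce</goal>
            </goals>
            <configuration>
                <rules>
                    <requirePluginVersions>
                        <message>Pin all plugin versions to avoid effective-model warnings.</message>
                    </requirePluginVersions>
                </rules>
            </configuration>
        </execution>
    </executions>
</plugin>
```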

caojie09 and others added 30 commits March 28, 2019 15:18
* fix temp push

* update version 5.2.1-SNAPSHOT

* fix test case
* add rest api,getDataInfoIdList,checkSumDataInfoIdList

* add publisher dataInfoList return
Update Travis CI Link Address
* fix jetty version,and fix rest api for dataInfoIds

* fix hashcode test

* fix working to init bug

* fix start task log

* fix Watcher can't get providate data,retry and finally return new

* add data server list api

* add server list api

* remove log

* fix issue 21

* add query by id function

* fix issue 22

* delay client off process and sync data process to working status

* fix data connet meta error

* fix inject NotifyDataSyncHandler

* fix start log

* add send sub log

* fix subscriber to send log

* fix word cache clientid

* add clientoff delay time

* fix clientOffDelayMs

* fix jetty version

* fix version to 5.2.1 release

* fix version

* fix .travis.yml

* fix test case

* fix

* fix test sync case

* fix test case

* fix test case

* fix case

* fix notify online no connect break,and add connect log

* add test case

* add test case

* fix test case

* fix format

* fix resource test case

* fix
* fix version to 5.2.1 for release

* remove data numberOfReplicas

* fix jraft version 1.2.5
* bugfix: sofastack#27

* bugfix: sofastack#27

* feature: Add monitoring logs sofastack#29

* feature: Add monitoring logs sofastack#29
(1) bugfix CommonResponse
(2) format

* bugfix: During meta startup, leader may not register itself sofastack#30

* bugfix: Sometimes receive "Not leader" response from leader in OnStartingFollowing() sofastack#31

* temp add

* add renew request

* data snapshot module

* add calculate digest service

* fix word cache clientid

* data renew module

* data renew/expired module

* add renew datuem request

* add WriteDataAcceptor

* session renew/expired module

* 1. bugfix ReNewDatumHandler: getByConnectId -> getOwnByConnectId
2. refactor DatumCache from static to instance

* add blacklist wrapper and filter

* upgrade jraft version to 1.2.5

* blacklist ut

* add clientoff delay time

* bugfix: The timing of snapshot construction is not right

* rename: ReNew -> Renew

* fix blacklist test case

* rename: unpub -> unPub

* add threadSize and queueSize limit

* bugfix: revert SessionRegistry

* fix sub fetch retry all error,and reset datainfoid version

* fix client fast chain breakage data can not be cleaned up

* (1) remove logback.xml DEBUG level;
(2) dataServerBootstrapConfig rename;
(3) print conf when startup

* update log

* fix update zero version,and fix log

* add clientOffDelayMs default value

* fix clientOffDelayMs

* Task(DatumSnapshot/Pub/UnPub) add retry strategy

* bugfix DataNodeServiceImpl: retryTimes

* (1)cancelDataTaskListener duplicate
(2)bugfix DataNodeServiceImpl and SessionRegistry

* refactor datum version

* add hessian black list

* bugfix: log "retryTimes"

* bugfix DatumLeaseManager:  Consider the situation of connectId lose after data restart; ownConnectId should calculate dynamically

* add jvm blacklist api

* fix file name

* some code optimization

* data:refactor snapshot

* fix jetty version

* bugfix DatumLeaseManager: If in a non-working state, cannot clean up because the renew request cannot be received at this time.

* remove SessionSerialFilterResource

* WriteDataProcessor add TaskEvent log; Cache print task update

* data bugfix: snapshot must notify session

* fix SubscriberPushEmptyTask default implement

* merge new

* fix protect

* 1. When the pub of connectId is 0, no clearance action is triggered.
2. Print map.size regularly
3. Delete the log: "ConnectId(%s) expired, lastRenewTime is %s, pub.size is 0"

* DataNodeExchanger: print but ignore if from renew module, cause renew request is too much

* reduce log of renew

* data bugfix: Data coverage is also allowed when versions are equal. Consistent with session design.

* DatumCache bugfix: Index coverage should be updated after pubMap update

* DatumSnapshotHandler: limit print; do not call dataChangeEventCenter.onChange if no diff

* bugfix unpub npe (pub maybe already clean by DatumLeaseManager);LIMITED_LIST_SIZE_FOR_PRINT change to 30

* some code refactor

* add code comment

* fix data working to init,and fix empty push version

* consider unpub is isWriteRequest, Reduce Snapshot frequency

* RefreshUpdateTime is at the top, otherwise multiple snapshot can be issued concurrently

* update config: reduce retryTimes, increase delayTime, the purpose is to reduce performance consumption

* put resume() in finally code block, avoid lock leak

* modify renewDatumWheelTaskDelay and datumTimeToLiveSec

* When session receives a connection and generates renew tasks, it randomly delays different times to avoid everyone launching renew at the same time.

* data: add executor for handler
session: bugfix snapshot
session: refactor wheelTimer of renew to add executor

* add get data log

* snapshot and lastUpdateTimestamp: Specific to dataServerIP

* 1. DataServer: RenewDatumHandler must return GenericResponse but not CommonResponse, or else session will class cast exception
2. No need to update timestamp after renew
3. snapshot: Need to specify DataServerIP

* add logs

* 1. dataServer: reduce log of snapshotHandler
2. update logs

* dataServer: renew logic should delay for some time after status is WORKING, cause Data is processed asynchronously after synchronization from other DataServer

* bugfix bean; update log

* ignore renew request log

* fix UT

* fix .travis.yml

* fix version 5.3.0-SNAPSHOT

* fix online notify connect error

* fix push confirm error,and fix datum update version,pub threadpool config,add accesslimit service

* add switch renew and expire

* implement renew enable/disable switch

* fix data client exechange log

* fix datum fetch connect error

* bugfix CacheService: set version zero when first sub and get datum error

* fix clean task for fetch

* bugfix DatumCache: Forget to clean up the index in datumCache.putSnapshot

* fix fetch datum word cache

* fix test case time

* fix test cast

* fix test case

* fix tast case

* fix ut case: StopPushDataSwitchTest

* ut case:renew module

* fix ut case:TempPublisherTest

* bugfix ut case: increase sleep time

* fix ut case:RenewTest

* fix ut case:RenewTest format

* fix pom version

* fix ut case:do not run parallelly
* fix push confirm error,and fix datum update version,pub threadpool config,add accesslimit service (sofastack#45)

* Session&Data increase WordCache use

* code optimize

* WordCache: registerId do not add WordCache

* fix NotifyFetchDatumHandler npe

* fix version,and merge new

* fix version and fix callback executor,fix log error

* refactor providerdata process

* Memory optimization:Datum.processDatum

* add session notify test

* copy from mybank:
1. Update Subscriber: support for push context
2. increase queueSize of checkPushExecutor
3. fix the isolation function of Gzone and Rzone

* Modify the deny policy of accessDataExecutor of SessionServer

* remove useless code

* fix call back

* fix meta methodhandle cache

* fix push confirm success

* Change the communication between session and data to multi connection

* resolve compile error

* fix processor

* BoltClient: the creation of ConnectionEventAdapter should be inheritable

* fix currentTimeMillis product from source

* add client Invalid check task

* use multiple RpcClient instances instead of one RpcClient with multiple connections,and start a heartbeat thread to ensure connection pool because bolt does not maintain the number of connection pools

* refactor TaskListener and use map instead of list in DefaultTaskListenerManager; refactor getSingleTaskDispatcher()

* DataChangeRequestHandler:optimize performance

* refactor: Heartbeat between session and data

* fix: Synex-wh#20 (review)

* update

* BoltClient use just one RpcClient;
remove heartbeat between session and data;

* SyncDataCallback reduce ThreadSize for saving cpu

* reduce NOTIFY_SESSION_CALLBACK_EXECUTOR threadSize

* fix version in DataChangeFetchTask

* 1. filter out the unPubs of datum when first put, Otherwise, "syncData" or "fetchData" get Datum may contains unPubs, which will result something error
2. add jul-to-slf4j for some lib which use jul log, e.g. hessian

* fix meta mem

* fix test case

* fix temp case

* fix syncConfigRetryInterval 60s

* fix format

Co-authored-by: wukezhu <atell@qq.com>
* update for idc sync:
1. add a interface DatumStorage and implemented by LocalDatumStorage
2. remove Sync from BackUpNotifier
3. add RemoteDataServerChangeEvent

* 1. NotifyProvideDataChange support multiple nodeTypes
2. refactor provideData code of DataServer, just like SessionServer
3. remove GetChangeListRequestHandler to enterprise version because it's about multiple data centers

* use getClientRegion() instead of getSessionServerRegion() for push

* bugfix LocalDatumStorage#getVersions

* bugfix DataDigestResource api

* bugfix DataDigestResource api

* fix BoltClient: remove unnecessary code

* give more thread for getOtherDataCenterNodeAndUpdate, because otherwise it would rejected if too much task

* refresh to keep connections to other dataServers: should use dataServerCache rather than DataServerNodeFactory

* revert "delay cache invalid in DataChangeFetchTask&DataChangeFetchCloudTask", because if the old datum is not invalidated, a new subscriber will get the old datum directly from the cache

* bugfix MetaStoreService&DataStoreService: "return" -> "continue"

* fix Memory waste of ServerDataBox

* revert MetaDigestResource api

* Request add method "getTimeout"

* bugfix: remove @ConditionalOnMissingBean for fetchDataHandler

* fix compile error

* RequestException: limit message size

* bugfix: empty dataServerList cause NPE because calculateOldConsistentHash return null

* trigger github ci

* trigger github ci

* fix ut

* update version to 5.4.0

Co-authored-by: Synex-wh <241809311@qq.com>
* fix npe in DataServerChangeEventHandler

* bugfix in cloud mode: If obtained datum from DataServer failed, should set sessionInterests version zero

* add dataNodeExchangeForFetchDatumTimeOut; add log for fetchChangDataProcess

* update ut
* update version to 5.4.1

* update ut
* fix localZone in DataChangeFetchTask, should use clientCell but not sessionServerRegion

* update version to 5.4.2

Co-authored-by: kezhu.wukz <kezhu.wukz@alipay.com>
* chore: update version to 5.4.3-SNAPSHOT

* chore: update hessian to 3.3.8
* Fix windows package error
…not set (sofastack#124)

Co-authored-by: kezhu.wukz <kezhu.wukz@alipay.com>
* chore: update version to 5.4.3-SNAPSHOT

* fix: when client re-connect happened before client-off, session will have dirty connectIndex
Co-authored-by: kezhu.wukz <kezhu.wukz@alipay.com>
* chore: update version to 5.4.3-SNAPSHOT

* fix: duplicated connectId

* fix: duplicated connectId and format

* fix: testcase

* fix: testcase

* fix: testcase

* fix: testcase
Bumps [netty-all](https://github.com/netty/netty) from 4.1.25.Final to 4.1.42.Final.
- [Release notes](https://github.com/netty/netty/releases)
- [Commits](netty/netty@netty-4.1.25.Final...netty-4.1.42.Final)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
dzdx and others added 6 commits August 23, 2020 14:17
* upgrade version to 5.4.4-SNAPSHOT

* update version 5.4.4

* deduplicate conn id

* fix snapshot backup

* watcher connectid four tuple
* upgrade version to 5.4.4-SNAPSHOT

* clean log

Co-authored-by: dzdx <dzidaxie@gmail.com>
* upgrade version to 5.4.5

* bugfix: repub will remove other publishers in same connectID

* split meta thread pool to improve stability

* configurable electiontimeout

* add admin api to implement session connections loadbalance

* session heartbeat timeout

* upgrade jraft to 1.3.5.Alpha1
When executing the command 'mvn clean install', the following warnings show up:
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-client-api:jar:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 57, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 43, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 50, column 21
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-client-log:jar:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 62, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 48, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 55, column 21
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-client-impl:jar:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 110, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 96, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 103, column 21
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-client-parent:pom:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 105, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 91, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 98, column 21
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-server-session:jar:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.springframework.boot:spring-boot-maven-plugin is missing. @ line 91, column 21
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-server-data:jar:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.springframework.boot:spring-boot-maven-plugin is missing. @ line 95, column 21
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-server-meta:jar:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.springframework.boot:spring-boot-maven-plugin is missing. @ line 102, column 21
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-server-integration:jar:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.springframework.boot:spring-boot-maven-plugin is missing. @ line 45, column 21
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-distribution-session:pom:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 43, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 29, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 36, column 21
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-distribution-data:pom:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 43, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 29, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 36, column 21
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-distribution-meta:pom:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 43, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 29, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 36, column 21
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-distribution-integration:pom:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 42, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 28, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 35, column 21
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.alipay.sofa:registry-distribution:pom:5.4.5
[WARNING] 'build.plugins.plugin.version' for org.sonatype.plugins:nexus-staging-maven-plugin is missing. @ line 47, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-install-plugin is missing. @ line 33, column 21
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-deploy-plugin is missing. @ line 40, column 21
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING]
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING]
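Each of these warnings disappears once the plugin version is declared explicitly. A minimal sketch of the kind of change this PR makes, declaring the versions once in the parent POM's pluginManagement so every child module inherits them (the version numbers below are illustrative, not necessarily the ones chosen in this PR):

```xml
<!-- Parent pom.xml: pinning plugin versions in pluginManagement silences the
     'build.plugins.plugin.version ... is missing' warnings in all modules.
     Versions shown are illustrative. -->
<build>
  <pluginManagement>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-install-plugin</artifactId>
        <version>2.5.2</version>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-deploy-plugin</artifactId>
        <version>2.8.2</version>
      </plugin>
      <plugin>
        <groupId>org.sonatype.plugins</groupId>
        <artifactId>nexus-staging-maven-plugin</artifactId>
        <version>1.6.8</version>
      </plugin>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <version>2.1.0.RELEASE</version>
      </plugin>
    </plugins>
  </pluginManagement>
</build>
```

Declaring versions in pluginManagement (rather than in each module's build.plugins) keeps the version in one place and does not itself bind any plugin to a lifecycle phase; the spring-boot-maven-plugin version would normally be kept in sync with the project's Spring Boot dependency version.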
dzdx and others added 3 commits December 2, 2020 15:04
* default disable drop connections

* lint

* start check client version cron

* don't use bolt-default-executor (sofastack#151)
* Create maven.yml

* Update maven.yml

* Update README.md

* Update README.md

* Update README.md

remove personal NickNYU

* Update maven.yml

Update README.md

* split executor in session and data (sofastack#152)

* default disable drop connections

* lint

* start check client version cron

* don't use bolt-default-executor (sofastack#151)

Update README.md

Update README.md

remove personal NickNYU

Co-authored-by: dzdx <dzidaxie@gmail.com>
Co-authored-by: 忘禅 <zhuchen.zhu@alibaba-inc.com>
@codecov

codecov bot commented Jun 15, 2021

Codecov Report

Merging #146 (313cc24) into master (38bae7d) will decrease coverage by 0.58%.
The diff coverage is n/a.

❗ Current head 313cc24 differs from pull request most recent head a996cbb. Consider uploading reports for the commit a996cbb to get more accurate results

@@             Coverage Diff              @@
##             master     #146      +/-   ##
============================================
- Coverage     62.68%   62.09%   -0.59%     
  Complexity       44       44              
============================================
  Files           437      437              
  Lines         15802    15802              
  Branches       1502     1502              
============================================
- Hits           9905     9812      -93     
- Misses         4881     4974      +93     
  Partials       1016     1016              
Impacted Files Coverage Δ
...y/server/data/bootstrap/DataServerInitializer.java 58.33% <0.00%> (-25.01%) ⬇️
...try/jraft/handler/RaftClientConnectionHandler.java 36.84% <0.00%> (-21.06%) ⬇️
...rver/data/change/notify/SessionServerNotifier.java 42.85% <0.00%> (-17.59%) ⬇️
...ipay/sofa/registry/server/data/util/DelayItem.java 52.17% <0.00%> (-17.40%) ⬇️
...er/session/bootstrap/SessionServerInitializer.java 57.14% <0.00%> (-14.29%) ⬇️
...ting/sessionserver/disconnect/DisconnectEvent.java 66.66% <0.00%> (-12.50%) ⬇️
...sofa/registry/server/data/change/SnapshotData.java 90.00% <0.00%> (-10.00%) ⬇️
...rver/session/node/service/DataNodeServiceImpl.java 60.55% <0.00%> (-7.23%) ⬇️
...a/registry/task/scheduler/TimedSupervisorTask.java 65.51% <0.00%> (-6.90%) ⬇️
...a/registry/server/meta/remoting/RaftExchanger.java 58.37% <0.00%> (-4.98%) ⬇️
... and 13 more


Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@nobodyiam
Member

@NickNYU please help to review
