Overview
My Spring/Hibernate web application running on MySQL is giving me trouble.
I have searched around, tried different configurations, and read quite a few threads on this website, but the problem keeps rearing its head.
The error message is:
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 63,313,144 milliseconds ago. The last packet sent successfully to the server was 63,313,144 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:408)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1137)
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3965)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2578)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2758)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2820)
at com.mysql.jdbc.ConnectionImpl.setAutoCommit(ConnectionImpl.java:5359)
at net.sf.log4jdbc.ConnectionSpy.setAutoCommit(ConnectionSpy.java:764)
at com.mchange.v2.c3p0.impl.NewProxyConnection.setAutoCommit(NewProxyConnection.java:912)
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doBegin(JdbcTransaction.java:72)
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.begin(AbstractTransactionImpl.java:162)
at org.hibernate.internal.SessionImpl.beginTransaction(SessionImpl.java:1435)
at org.hibernate.jpa.internal.TransactionImpl.begin(TransactionImpl.java:61)
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.beginTransaction(HibernateJpaDialect.java:159)
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:380)
... 46 more
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3946)
... 58 more
The MySQL wait_timeout value is 28800.
My data source, c3p0 and Hibernate configuration is:
@Bean
public DataSource dataSource() throws PropertyVetoException {
    ComboPooledDataSource dataSource = new ComboPooledDataSource();
    dataSource.setDriverClass(databaseProperties.getHibernateDriverClassName());
    dataSource.setJdbcUrl(databaseProperties.getDataSourceUrl());
    dataSource.setUser(databaseProperties.getDataSourceUsername());
    dataSource.setPassword(databaseProperties.getDataSourcePassword());
    dataSource.setAcquireIncrement(5);
    dataSource.setMaxStatementsPerConnection(20);
    dataSource.setMaxStatements(100);
    dataSource.setMinPoolSize(2);
    dataSource.setMaxPoolSize(5);
    return dataSource;
}
@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory() throws PropertyVetoException {
    HibernateJpaVendorAdapter jpaVendorAdapter = new HibernateJpaVendorAdapter();
    jpaVendorAdapter.setDatabasePlatform(databaseProperties.getHibernateDialect());
    jpaVendorAdapter.setShowSql(true);
    jpaVendorAdapter.setGenerateDdl(false);
    Map<String, Object> jpaPropertiesMap = new HashMap<String, Object>();
    jpaPropertiesMap.put("hibernate.dialect", databaseProperties.getHibernateDialect());
    jpaPropertiesMap.put("hibernate.show_sql", "true");
    jpaPropertiesMap.put("hibernate.format_sql", "true");
    jpaPropertiesMap.put("hibernate.hbm2ddl.auto", databaseProperties.getHibernateHbm2ddlAuto());
    jpaPropertiesMap.put("hibernate.transaction.factory_class", "org.hibernate.transaction.JDBCTransactionFactory");
    jpaPropertiesMap.put("hibernate.ejb.naming_strategy", "org.hibernate.cfg.ImprovedNamingStrategy");
    jpaPropertiesMap.put("hibernate.c3p0.min_size", "5");
    jpaPropertiesMap.put("hibernate.c3p0.max_size", "20");
    jpaPropertiesMap.put("hibernate.c3p0.timeout", "1000");
    jpaPropertiesMap.put("c3p0.maxConnectionAge", "7200");
    jpaPropertiesMap.put("c3p0.maxIdleTime", "7200");
    jpaPropertiesMap.put("c3p0.unreturnedConnectionTimeout", "60");
    jpaPropertiesMap.put("c3p0.debugUnreturnedConnectionStackTraces", "true");
    jpaPropertiesMap.put("hibernate.c3p0.max_statements", "50");
    // Prevent JPA from converting the dates to the UTC time zone
    jpaPropertiesMap.put("jadira.usertype.autoRegisterUserTypes", "true");
    jpaPropertiesMap.put("jadira.usertype.databaseZone", "jvm");
    jpaPropertiesMap.put("jadira.usertype.javaZone", "jvm");
    LocalContainerEntityManagerFactoryBean factoryBean = new LocalContainerEntityManagerFactoryBean();
    factoryBean.setJpaVendorAdapter(jpaVendorAdapter);
    factoryBean.setPackagesToScan("com.nsn.nitro.project.data.jpa.domain");
    factoryBean.setJpaPropertyMap(jpaPropertiesMap);
    String[] mappingsResources = new String[] {"custom/typedef.hbm.xml"};
    factoryBean.setMappingResources(mappingsResources);
    factoryBean.setDataSource(dataSource());
    return factoryBean;
}
The error happens the next morning, when I come back to the web application after it has sat unused for the whole night.
I understand that the MySQL wait_timeout is the number of seconds MySQL will wait for a connection to be used again before closing it.
That means my web application is trying to use a connection that has already expired and been closed on the MySQL side, while still treating it as valid; indeed, the 63,313,144 milliseconds in the error message is roughly 17.6 hours, well past the 28,800-second (8-hour) wait_timeout.
I suppose I should therefore make my web application time out connections before MySQL does. That way the application would never reuse a connection that MySQL has already closed, because the pool would have retired it first.
It feels like all the c3p0 configuration I added to time out unused connections is not doing its job, though.
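As a quick sanity check on that (only a sketch of the idea; the diagnostic bean below is hypothetical and not part of my application), I could read the effective values back from the pool after startup and compare them to wait_timeout, assuming the ComboPooledDataSource getters mirror the setters and property names used above:

import javax.annotation.PostConstruct;
import javax.sql.DataSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import com.mchange.v2.c3p0.ComboPooledDataSource;

// Hypothetical diagnostic bean: it only logs the settings the pool actually runs with.
@Component
public class PoolSettingsLogger {

    private static final Logger LOG = LoggerFactory.getLogger(PoolSettingsLogger.class);

    @Autowired
    private DataSource dataSource;

    @PostConstruct
    public void logEffectivePoolSettings() {
        if (dataSource instanceof ComboPooledDataSource) {
            ComboPooledDataSource pool = (ComboPooledDataSource) dataSource;
            // All values are in seconds; in c3p0, 0 means the feature is disabled.
            LOG.info("maxIdleTime={}s maxConnectionAge={}s idleConnectionTestPeriod={}s testConnectionOnCheckin={}",
                    pool.getMaxIdleTime(), pool.getMaxConnectionAge(),
                    pool.getIdleConnectionTestPeriod(), pool.isTestConnectionOnCheckin());
        }
    }
}

In the C3P0 startup log I added below, maxIdleTime and maxConnectionAge both show up as 0, so the values I put into the JPA property map do not seem to be the ones the pool is actually running with.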
I'm using the following stack:
MySQL mysql-5.6.14
mysql-connector-java 5.1.32
Spring 4.1.0.RELEASE
spring-data-jpa 1.6.2.RELEASE
Hibernate 4.3.6.Final
hibernate-jpa-2.1-api 1.0.0.Final
C3P0 0.9.2.1
What am I doing wrong in my configuration?
Or am I supposed to explicitly close connections?
Here is how I set up the repositories:
public interface LanguageRepository extends GenericRepository {
}
@Repository
@Transactional
public class GenericRepositoryImpl<T, ID extends Serializable> extends SimpleJpaRepository<T, ID> implements GenericRepository<T, ID> {

    private EntityManager entityManager;

    public GenericRepositoryImpl(JpaEntityInformation<T, ?> entityMetadata, EntityManager entityManager) {
        super(entityMetadata, entityManager);
        this.entityManager = entityManager;
    }

    public GenericRepositoryImpl(Class<T> domainClass, EntityManager entityManager) {
        super(domainClass, entityManager);
        this.entityManager = entityManager;
    }

    public EntityManager getEntityManager() {
        return entityManager;
    }

    @Override
    @Transactional
    public T deleteById(ID id) throws EntityNotFoundException {
        T entity = findOne(id);
        if (entity != null) {
            delete(entity);
        } else {
            throw new EntityNotFoundException("The entity could not be found and was not deleted");
        }
        return entity;
    }
}
public class GenericRepositoryFactoryBean<R extends JpaRepository<T, I>, T, I extends Serializable> extends JpaRepositoryFactoryBean<R, T, I> {

    protected RepositoryFactorySupport createRepositoryFactory(EntityManager entityManager) {
        return new BaseRepositoryFactory(entityManager);
    }

    protected static class BaseRepositoryFactory extends JpaRepositoryFactory {

        private EntityManager entityManager;

        public BaseRepositoryFactory(EntityManager entityManager) {
            super(entityManager);
            this.entityManager = entityManager;
        }

        @Override
        protected Object getTargetRepository(RepositoryMetadata metadata) {
            return new GenericRepositoryImpl((JpaEntityInformation) getEntityInformation(metadata.getDomainType()), entityManager);
        }

        @Override
        protected Class<?> getRepositoryBaseClass(RepositoryMetadata metadata) {
            return GenericRepositoryImpl.class;
        }
    }
}
@NoRepositoryBean
public interface GenericRepository<T, ID extends Serializable> extends JpaRepository<T, ID> {

    public EntityManager getEntityManager();

    public T deleteById(ID id) throws EntityNotFoundException;
}
I cannot see any close() method being implemented or called anywhere in there. Is something missing in my code?
EDIT: I added logging for C3P0. Here is the output:
2014-10-17 14:29:00,464 INFO [AbstractPoolBackedDataSource] Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 5, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, dataSourceName -> agvw3s958cggbnis1syx|1acb901, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> net.sf.log4jdbc.DriverSpy, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, identityToken -> agvw3s958cggbnis1syx|1acb901, idleConnectionTestPeriod -> 0, initialPoolSize -> 3, jdbcUrl -> jdbc:log4jdbc:mysql://127.0.0.1:3306/nitroproject, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 5, maxStatements -> 100, maxStatementsPerConnection -> 20, minPoolSize -> 2, numHelperThreads -> 3, preferredTestQuery -> null, properties -> {user=******, password=******}, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, userOverrides -> {}, usesTraditionalReflectiveProxies -> false ]
2014-10-17 14:29:00,479 DEBUG [BasicResourcePool] incremented pending_acquires: 1
2014-10-17 14:29:00,480 DEBUG [BasicResourcePool] Starting acquisition series. Incremented pending_acquires [1], attempts_remaining: 30
2014-10-17 14:29:00,480 DEBUG [ThreadPoolAsynchronousRunner] com.mchange.v2.async.ThreadPoolAsynchronousRunner@11dbedc: Adding task to queue -- com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@1dd75ae
2014-10-17 14:29:00,481 DEBUG [BasicResourcePool] incremented pending_acquires: 2
2014-10-17 14:29:00,481 DEBUG [BasicResourcePool] Starting acquisition series. Incremented pending_acquires [2], attempts_remaining: 30
2014-10-17 14:29:00,482 DEBUG [ThreadPoolAsynchronousRunner] com.mchange.v2.async.ThreadPoolAsynchronousRunner@11dbedc: Adding task to queue -- com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@15083c7
2014-10-17 14:29:00,482 DEBUG [BasicResourcePool] incremented pending_acquires: 3
2014-10-17 14:29:00,483 DEBUG [BasicResourcePool] Starting acquisition series. Incremented pending_acquires [3], attempts_remaining: 30
2014-10-17 14:29:00,483 DEBUG [ThreadPoolAsynchronousRunner] com.mchange.v2.async.ThreadPoolAsynchronousRunner@11dbedc: Adding task to queue -- com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@fbbf1d
2014-10-17 14:29:00,511 DEBUG [GooGooStatementCache] checkinAll(): com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 0; checked out: 0; num connections: 0; num keys: 0
2014-10-17 14:29:00,523 DEBUG [C3P0PooledConnectionPool] com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager@e38aca.acquireResource() returning.
2014-10-17 14:29:00,523 DEBUG [BasicResourcePool] trace com.mchange.v2.resourcepool.BasicResourcePool@ad2e72 [managed: 1, unused: 1, excluded: 0]
2014-10-17 14:29:00,523 DEBUG [BasicResourcePool] decremented pending_acquires: 2
2014-10-17 14:29:00,523 DEBUG [BasicResourcePool] Acquisition series terminated successfully. Decremented pending_acquires [2], attempts_remaining: 30
2014-10-17 14:29:00,521 DEBUG [BasicResourcePool] com.mchange.v2.resourcepool.BasicResourcePool@ad2e72 config: [start -> 3; min -> 2; max -> 5; inc -> 5; num_acq_attempts -> 30; acq_attempt_delay -> 1000; check_idle_resources_delay -> 0; mox_resource_age -> 0; max_idle_time -> 0; excess_max_idle_time -> 0; destroy_unreturned_resc_time -> 0; expiration_enforcement_delay -> 0; break_on_acquisition_failure -> false; debug_store_checkout_exceptions -> false]
2014-10-17 14:29:00,523 DEBUG [C3P0PooledConnectionPoolManager] Created new pool for auth, username (masked): 'ni******'.
2014-10-17 14:29:00,523 DEBUG [BasicResourcePool] trace com.mchange.v2.resourcepool.BasicResourcePool@ad2e72 [managed: 1, unused: 0, excluded: 0] (e.g. com.mchange.v2.c3p0.impl.NewPooledConnection@acde7c)
2014-10-17 14:29:00,524 DEBUG [GooGooStatementCache] checkinAll(): com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 0; checked out: 0; num connections: 0; num keys: 0
2014-10-17 14:29:00,525 DEBUG [C3P0PooledConnectionPool] com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager@e38aca.acquireResource() returning.
2014-10-17 14:29:00,525 DEBUG [BasicResourcePool] trace com.mchange.v2.resourcepool.BasicResourcePool@ad2e72 [managed: 2, unused: 1, excluded: 0] (e.g. com.mchange.v2.c3p0.impl.NewPooledConnection@acde7c)
2014-10-17 14:29:00,525 DEBUG [BasicResourcePool] decremented pending_acquires: 1
2014-10-17 14:29:00,529 DEBUG [BasicResourcePool] Acquisition series terminated successfully. Decremented pending_acquires [1], attempts_remaining: 30
2014-10-17 14:29:00,525 DEBUG [GooGooStatementCache] checkinAll(): com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 0; checked out: 0; num connections: 0; num keys: 0
2014-10-17 14:29:00,530 DEBUG [C3P0PooledConnectionPool] com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager@e38aca.acquireResource() returning.
2014-10-17 14:29:00,530 DEBUG [BasicResourcePool] trace com.mchange.v2.resourcepool.BasicResourcePool@ad2e72 [managed: 3, unused: 2, excluded: 0] (e.g. com.mchange.v2.c3p0.impl.NewPooledConnection@acde7c)
2014-10-17 14:29:00,530 DEBUG [BasicResourcePool] decremented pending_acquires: 0
2014-10-17 14:29:00,530 DEBUG [BasicResourcePool] Acquisition series terminated successfully. Decremented pending_acquires [0], attempts_remaining: 30
2014-10-17 14:29:00,562 DEBUG [GooGooStatementCache] checkinAll(): com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 0; checked out: 0; num connections: 0; num keys: 0
2014-10-17 14:29:00,574 DEBUG [ThreadPoolAsynchronousRunner] com.mchange.v2.async.ThreadPoolAsynchronousRunner@11dbedc: Adding task to queue -- com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@1fec09e
2014-10-17 14:29:00,574 DEBUG [GooGooStatementCache] checkinAll(): com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 0; checked out: 0; num connections: 0; num keys: 0
2014-10-17 14:29:00,575 DEBUG [BasicResourcePool] trace com.mchange.v2.resourcepool.BasicResourcePool@ad2e72 [managed: 3, unused: 2, excluded: 0] (e.g. com.mchange.v2.c3p0.impl.NewPooledConnection@acde7c)
2014-10-17 14:29:00,575 DEBUG [NewProxyConnection] com.mchange.v2.c3p0.impl.NewProxyConnection@39069f: close() called more than once.
2014-10-17 14:29:02,260 DEBUG [BasicResourcePool] trace com.mchange.v2.resourcepool.BasicResourcePool@ad2e72 [managed: 3, unused: 2, excluded: 0] (e.g. com.mchange.v2.c3p0.impl.NewPooledConnection@acde7c)
2014-10-17 14:29:03,111 DEBUG [GooGooStatementCache] checkinAll(): com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 0; checked out: 0; num connections: 0; num keys: 0
2014-10-17 14:29:03,112 DEBUG [ThreadPoolAsynchronousRunner] com.mchange.v2.async.ThreadPoolAsynchronousRunner@11dbedc: Adding task to queue -- com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@146219b
2014-10-17 14:29:03,112 DEBUG [GooGooStatementCache] checkinAll(): com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 0; checked out: 0; num connections: 0; num keys: 0
2014-10-17 14:29:03,113 DEBUG [BasicResourcePool] trace com.mchange.v2.resourcepool.BasicResourcePool@ad2e72 [managed: 3, unused: 2, excluded: 0] (e.g. com.mchange.v2.c3p0.impl.NewPooledConnection@acde7c)
2014-10-17 14:29:03,262 DEBUG [DefaultListableBeanFactory] Returning cached instance of singleton bean 'org.springframework.transaction.config.internalTransactionAdvisor'
2014-10-17 14:29:03,285 DEBUG [DefaultListableBeanFactory] Finished creating instance of bean 'entityManagerFactory'
And then it loops, outputting this:
2014-10-17 14:34:10,399 DEBUG [ThreadPoolAsynchronousRunner] com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@70b40a -- Running DeadlockDetector[Exiting. No pending tasks.]
2014-10-17 14:34:10,825 DEBUG [ThreadPoolAsynchronousRunner] com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@15e34e2 -- Running DeadlockDetector[Exiting. No pending tasks.]
2014-10-17 14:34:10,825 DEBUG [ThreadPoolAsynchronousRunner] com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@15e34e2 -- Running DeadlockDetector[Exiting. No pending tasks.]
Solution
I suppose the issue comes up because the database server kills the connection while the application's connection pool still holds a handle to it. By giving the application connection pool a shorter timeout than the database server's, the pool renews the connection before the server can kill it, which avoids the problem. My MySQL database server has a wait_timeout of 28,800 seconds, and my C3P0 application connection pool now has a timeout of 14,400 seconds. It makes sense that the chain of timeouts has to get shorter as you move from servers to clients.
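As a minimal sketch of what that could look like for the dataSource() bean shown in the question (an assumption on my part: I am mapping the 14,400-second pool timeout onto c3p0's maxIdleTime and maxConnectionAge, set directly on the ComboPooledDataSource rather than through the JPA property map; the connection-testing calls are the optional "testing connection validity before use" idea from the Connector/J error message, not something this answer requires):

@Bean
public DataSource dataSource() throws PropertyVetoException {
    ComboPooledDataSource dataSource = new ComboPooledDataSource();
    dataSource.setDriverClass(databaseProperties.getHibernateDriverClassName());
    dataSource.setJdbcUrl(databaseProperties.getDataSourceUrl());
    dataSource.setUser(databaseProperties.getDataSourceUsername());
    dataSource.setPassword(databaseProperties.getDataSourcePassword());
    dataSource.setAcquireIncrement(5);
    dataSource.setMaxStatementsPerConnection(20);
    dataSource.setMaxStatements(100);
    dataSource.setMinPoolSize(2);
    dataSource.setMaxPoolSize(5);
    // Retire connections on the client side well before MySQL's
    // wait_timeout (28,800 seconds) closes them on the server side.
    dataSource.setMaxIdleTime(14400);       // seconds
    dataSource.setMaxConnectionAge(14400);  // seconds
    // Optional extra safety suggested by the driver's error message:
    // periodically validate idle connections so stale ones get evicted.
    dataSource.setIdleConnectionTestPeriod(300);   // seconds
    dataSource.setPreferredTestQuery("SELECT 1");
    dataSource.setTestConnectionOnCheckin(true);
    return dataSource;
}

With values like these, a pooled connection is discarded after at most four hours of idleness and never lives longer than four hours in total, keeping the client-side timeouts comfortably shorter than the server's.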