When using Jedis against a Redis cluster, the calls failed with "connection refused" and "Could not get a resource from the pool". The test code is as follows:
import java.util.HashSet;
import java.util.Set;

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.junit.Test;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

@Test
public void testRedis() {
    final String redisURL = "192.168.3.3";
    final String auth = "pass";
    final int MAX_IDLE = 200;
    final int MAX_TOTAL = 1024;
    final int CONN_TIME_OUT = 1000; // timeout for connecting to the server
    final int SO_TIME_OUT = 1000;   // timeout waiting for a response
    final int MAX_ATTEMPTS = 1;     // number of retry attempts

    GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
    poolConfig.setMaxTotal(MAX_TOTAL);
    poolConfig.setMaxIdle(MAX_IDLE);

    // the six cluster nodes all run on the same host, ports 5001-5006
    Set<HostAndPort> hostAndPortSet = new HashSet<>(6);
    for (int port = 5001; port <= 5006; port++) {
        hostAndPortSet.add(new HostAndPort(redisURL, port));
    }

    // public JedisCluster(Set<HostAndPort> jedisClusterNode, int connectionTimeout,
    //                     int soTimeout, int maxAttempts, String password, GenericObjectPoolConfig poolConfig)
    JedisCluster jedisCluster = new JedisCluster(hostAndPortSet, CONN_TIME_OUT, SO_TIME_OUT,
            MAX_ATTEMPTS, auth, poolConfig);
    jedisCluster.set("a", "1o");
    System.out.println(jedisCluster.get("a"));
}
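Why does a loopback address break a remote client? JedisCluster uses the seed HostAndPort set only for the initial connection; it then downloads the slot-to-node map from the cluster itself, and that map contains whatever addresses the nodes advertise. If the nodes advertise 127.0.0.1 (as in the file below), the client is told to connect to its own loopback interface, which refuses the connection. As a minimal sketch of the check (a hypothetical helper, not part of Jedis), this scans CLUSTER NODES output for loopback addresses:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: scan CLUSTER NODES output (same format as nodes.conf)
// for loopback addresses. If any node advertises 127.0.0.1, a JedisCluster
// client on another machine will be redirected to its own loopback interface
// and fail with "connection refused".
public class ClusterNodesCheck {

    static List<String> loopbackNodes(String clusterNodesOutput) {
        List<String> bad = new ArrayList<>();
        for (String line : clusterNodesOutput.split("\n")) {
            String[] fields = line.trim().split("\\s+");
            if (fields.length < 2) {
                continue;
            }
            // field 1 is "ip:port@cport", e.g. "127.0.0.1:5001@15001"
            String addr = fields[1];
            if (addr.startsWith("127.0.0.1:") || addr.startsWith("localhost:")) {
                bad.add(addr);
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        String sample =
            "692301fa80af9870bb258c34b553590fbced72ad 127.0.0.1:5001@15001 master - 0 1513932380000 1 connected 0-5460\n" +
            "164295b1736fb5fc244abf0a2e57774d938a7262 192.168.3.3:5002@15002 master - 0 1513932379545 2 connected 5461-10922";
        System.out.println(loopbackNodes(sample)); // prints [127.0.0.1:5001@15001]
    }
}
```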
After some research, my best guess was that the problem lay in redis-cluster/redis**/nodes_**.conf, so I modified that file.
The file contents before the change:
30098db4933f13b02ef2a30a5070617f0ae58fd4 127.0.0.1:5006@15006 slave 0ee3bde15b2caf32df19ef85d874c228be8d9342 0 1513932380250 6 connected
692301fa80af9870bb258c34b553590fbced72ad 127.0.0.1:5001@15001 master - 0 1513932380000 1 connected 0-5460
164295b1736fb5fc244abf0a2e57774d938a7262 127.0.0.1:5002@15002 master - 0 1513932379545 2 connected 5461-10922
0ee3bde15b2caf32df19ef85d874c228be8d9342 127.0.0.1:5003@15003 myself,master - 0 1513932380000 3 connected 10923-16383
657967366913fff237e091320761ea7ef9544893 127.0.0.1:5004@15004 slave 692301fa80af9870bb258c34b553590fbced72ad 0 1513932380049 4 connected
13c976fada6a8537826b2585a5dabad6fd2a4ece 127.0.0.1:5005@15005 slave 164295b1736fb5fc244abf0a2e57774d938a7262 0 1513932380553 5 connected
The contents after the change:
30098db4933f13b02ef2a30a5070617f0ae58fd4 192.168.3.3:5006@15006 slave 0ee3bde15b2caf32df19ef85d874c228be8d9342 0 1513932380250 6 connected
692301fa80af9870bb258c34b553590fbced72ad 192.168.3.3:5001@15001 master - 0 1513932380000 1 connected 0-5460
164295b1736fb5fc244abf0a2e57774d938a7262 192.168.3.3:5002@15002 master - 0 1513932379545 2 connected 5461-10922
0ee3bde15b2caf32df19ef85d874c228be8d9342 192.168.3.3:5003@15003 myself,master - 0 1513932380000 3 connected 10923-16383
657967366913fff237e091320761ea7ef9544893 192.168.3.3:5004@15004 slave 692301fa80af9870bb258c34b553590fbced72ad 0 1513932380049 4 connected
13c976fada6a8537826b2585a5dabad6fd2a4ece 192.168.3.3:5005@15005 slave 164295b1736fb5fc244abf0a2e57774d938a7262 0 1513932380553 5 connected
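One caveat about editing nodes.conf by hand: a running node rewrites this file, so each node should be stopped before its copy is edited, then restarted. Since Redis 4.0 there is also a supported alternative that avoids touching nodes.conf at all: have each node announce a reachable address via the cluster-announce-* directives in its redis.conf. A sketch for the 5001 node, using this post's addresses:

```
# redis.conf for the node listening on 5001 (repeat per node with its own ports)
cluster-announce-ip 192.168.3.3
cluster-announce-port 5001
cluster-announce-bus-port 15001
```

With these set, the node advertises 192.168.3.3 to clients and to the other nodes instead of the address it happened to bind to.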
Source:
PS: That blogger's final solution was to rebuild the cluster. I did not rebuild it; I only edited the nodes**_conf file on every node and restarted, which also solved the problem. So far nothing seems wrong with this approach; if issues turn up, I will come back and update this post. ×——×