Overview
Using the Redis API
The redis-py API can be grouped into the following categories:
Connection modes
Connection pools
Operations
String operations
Hash operations
List operations
Set operations
Sorted Set operations
Pipelines
Publish/subscribe
Connection modes
1. Basic usage
redis-py provides two classes for issuing Redis commands: Redis and StrictRedis. StrictRedis implements most of the official commands with the official syntax and argument order, while Redis is a subclass of StrictRedis kept for backwards compatibility with older versions of redis-py.
import redis

r = redis.Redis(host='10.211.55.4', port=6379)
r.set('foo', 'Bar')
print(r.get('foo'))
2. Connection pools
redis-py uses a connection pool to manage all connections to a Redis server, which avoids the cost of establishing and tearing down a connection for every command. By default, each Redis instance maintains its own pool. You can also create a ConnectionPool yourself and pass it to Redis, so that multiple Redis instances share a single pool, as in the sketch below.
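A minimal sketch of sharing one pool, assuming a Redis server at 10.211.55.4:6379 as in the example above; both client objects draw connections from the same ConnectionPool instead of opening new ones per request.

import redis

# Create one pool and hand it to every Redis instance that should share it.
pool = redis.ConnectionPool(host='10.211.55.4', port=6379)

r1 = redis.Redis(connection_pool=pool)
r2 = redis.Redis(connection_pool=pool)

r1.set('foo', 'Bar')
print(r2.get('foo'))  # served over a connection borrowed from the shared pool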
Operations
1. String operations
A Redis String is stored in memory as a single name mapped to a single value. A short usage sketch follows the method list below.
set(name, value, ex=None, px=None, nx=False, xx=False)
setnx(name, value)
setex(name, value, time)
psetex(name, time_ms, value)
mset(*args, **kwargs)
get(name)
mget(keys, *args)
getset(name, value)
getrange(key, start, end)
setrange(name, offset, value)
setbit(name, offset, value)
* Example use: the most space-efficient way to record how many users are online and which users they are (one bit per user id).
getbit(name, offset)
bitcount(key, start=None, end=None)
strlen(name)
incr(self, name, amount=1)
incrbyfloat(self, name, amount=1.0)
decr(self, name, amount=1)
append(key, value)
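A short usage sketch for the String commands above (server address reused from the connection example; the key names are made up for illustration). It also shows the bitmap trick from the setbit note: one bit per user id, so counting online users costs almost no memory.

import redis

r = redis.Redis(host='10.211.55.4', port=6379)

# set/get with a 10-second expiry; nx=True only writes if the key is absent.
r.set('greeting', 'hello', ex=10, nx=True)
print(r.get('greeting'))

# Counters.
r.set('visits', 0)
r.incr('visits')             # -> 1
r.incr('visits', amount=5)   # -> 6

# Bitmap of online users: set the bit at offset <user id> to 1.
r.setbit('online_users', 3, 1)
r.setbit('online_users', 7, 1)
print(r.getbit('online_users', 3))   # 1, user 3 is online
print(r.bitcount('online_users'))    # 2 users online in total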
2. Hash operations
A Redis Hash behaves much like a Python dict and is a good fit for a group of closely related fields. A short usage sketch follows the method list below.
hset(name, key, value)
hmset(name, mapping)
hget(name, key)
hmget(name, keys, *args)
hgetall(name)
hlen(name)
hkeys(name)
hvals(name)
hexists(name, key)
hdel(name, *keys)
hincrby(name, key, amount=1)
hincrbyfloat(name, key, amount=1.0)
hscan(name, cursor=0, match=None, count=None)
Start a full hash scan with:
HSCAN myhash 0
Start a hash scan with fields matching a pattern with:
HSCAN myhash 0 MATCH order_*
Start a hash scan with fields matching a pattern and forcing the scan command to do more scanning with:
HSCAN myhash 0 MATCH order_* COUNT 1000
hscan_iter(name, match=None, count=None)
# Wraps hscan in a generator (via yield) so the data is fetched from Redis in batches.
# Parameters:
#   match: only return fields matching this pattern; None (default) means all fields
#   count: hint for the minimum number of items fetched per batch; None uses Redis's default
# Example:
#   for item in r.hscan_iter('xx'):
#       print(item)
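A short usage sketch for the Hash commands above (key and field names are made up for illustration):

import redis

r = redis.Redis(host='10.211.55.4', port=6379)

# Store one order as a hash and read fields back individually or all at once.
r.hset('order:1001', 'user', 'alex')
r.hset('order:1001', 'amount', 42)
print(r.hget('order:1001', 'user'))
print(r.hgetall('order:1001'))

r.hincrby('order:1001', 'amount', 8)   # amount is now 50

# Walk a large hash in batches instead of loading it all with hgetall.
for field, value in r.hscan_iter('order:1001', match='*', count=100):
    print(field, value)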
3. List operations
A Redis List is stored in memory as a single name mapped to a list of values. A short usage sketch follows the method list below.
lpush(name, values)
lpushx(name, value)
llen(name)
linsert(name, where, refvalue, value)
lset(name, index, value)
lrem(name, value, num)
lpop(name)
lindex(name, index)
lrange(name, start, end)
ltrim(name, start, end)
rpoplpush(src, dst)
blpop(keys, timeout)
brpoplpush(src, dst, timeout=0)
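A short usage sketch for the List commands above, using the list as a simple task queue (key names made up for illustration):

import redis

r = redis.Redis(host='10.211.55.4', port=6379)

# Producers push on the left, consumers pop from the right (FIFO).
r.lpush('tasks', 'task1', 'task2', 'task3')
print(r.llen('tasks'))            # 3
print(r.lrange('tasks', 0, -1))   # newest first, because lpush prepends

print(r.rpop('tasks'))            # 'task1', the oldest task

# blpop blocks until an element arrives or the timeout (seconds) expires.
print(r.blpop(['tasks'], timeout=1))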
4. Set operations
A Set is essentially a list that does not allow duplicate members. A short usage sketch follows the method list below.
sadd(name, values)
scard(name)
sdiff(keys, *args)
sdiffstore(dest, keys, *args)
sinter(keys, *args)
sinterstore(dest, keys, *args)
sismember(name, value)
smembers(name)
smove(src, dst, value)
spop(name)
srandmember(name, numbers)
srem(name, values)
sunion(keys, *args)
sunionstore(dest, keys, *args)
sscan(name, cursor=0, match=None, count=None)
sscan_iter(name, match=None, count=None)
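A short usage sketch for the Set commands above (key names made up for illustration):

import redis

r = redis.Redis(host='10.211.55.4', port=6379)

# Track which users visited on two different days.
r.sadd('visitors:monday', 'alex', 'bob', 'carol')
r.sadd('visitors:tuesday', 'bob', 'dave')

print(r.scard('visitors:monday'))                        # 3 members
print(r.sismember('visitors:monday', 'alex'))            # True
print(r.sinter('visitors:monday', 'visitors:tuesday'))   # visited on both days
print(r.sdiff('visitors:monday', 'visitors:tuesday'))    # visited only on Monday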
5. Sorted Set operations
A Sorted Set adds ordering on top of a Set: members are ordered by a second value, so every member of a sorted set carries both a value and a score, and the score is used purely for sorting. A short usage sketch follows the method list below.
zadd(name, *args, **kwargs)
zcard(name)
zcount(name, min, max)
zincrby(name, value, amount)
zrange(name, start, end, desc=False, withscores=False, score_cast_func=float)
zrank(name, value)
zrem(name, values)
zremrangebyrank(name, min, max)
zremrangebyscore(name, min, max)
zscore(name, value)
zinterstore(dest, keys, aggregate=None)
zunionstore(dest, keys, aggregate=None)
zscan(name, cursor=0, match=None, count=None, score_cast_func=float)
zscan_iter(name, match=None, count=None, score_cast_func=float)
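A short usage sketch for the Sorted Set commands above. It uses the zadd(name, *args, **kwargs) keyword form listed here, which is the redis-py 2.x signature; redis-py 3.x takes a mapping instead, e.g. r.zadd('ranking', {'tom': 87}). Key and member names are made up for illustration.

import redis

r = redis.Redis(host='10.211.55.4', port=6379)

# Score each member; the score alone determines the ordering.
r.zadd('ranking', tom=87, jerry=93, spike=60)   # redis-py 2.x keyword form

print(r.zcard('ranking'))                            # 3 members
print(r.zrange('ranking', 0, -1, withscores=True))   # ascending by score
print(r.zrank('ranking', 'spike'))                   # 0, the lowest score
print(r.zscore('ranking', 'jerry'))                  # 93.0
print(r.zcount('ranking', 80, 100))                  # members scoring between 80 and 100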
Other common operations (a short usage sketch follows the list below)
delete(*names)
exists(name)
keys(pattern='*')
expire(name, time)
rename(src, dst)
move(name, db)
randomkey()
type(name)
scan(cursor=0, match=None, count=None)
scan_iter(match=None, count=None)
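A short usage sketch for the key-level commands above (key names made up for illustration). scan_iter walks the keyspace incrementally and is preferable to keys() on large databases, because keys() blocks the server while building the full result.

import redis

r = redis.Redis(host='10.211.55.4', port=6379)

r.set('session:abc', 'alex')
r.expire('session:abc', 30)        # the key disappears after 30 seconds
print(r.exists('session:abc'))
print(r.type('session:abc'))       # b'string' on Python 3

# Iterate matching keys in batches instead of keys('session:*').
for key in r.scan_iter(match='session:*', count=100):
    print(key)

r.rename('session:abc', 'session:xyz')
r.delete('session:xyz')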
Pipelines
By default, redis-py borrows a connection from the pool and returns it for every single request. To send several commands in one request, use a pipeline, which batches multiple commands into a single round trip; by default a pipeline also runs as an atomic transaction (MULTI/EXEC).
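A minimal pipeline sketch (server address reused from the examples above): several commands are queued locally and sent in one round trip, and with transaction=True (the default) they are wrapped in MULTI/EXEC so they execute atomically.

import redis

r = redis.Redis(host='10.211.55.4', port=6379)

pipe = r.pipeline(transaction=True)
pipe.set('name', 'alex')
pipe.incr('counter')
pipe.expire('counter', 60)
results = pipe.execute()   # one list of replies, in command order
print(results)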
Publish/subscribe
Publisher: the server
Subscribers: the dashboard and the data-processing workers
Demo:
import redis

class RedisHelper:
    def __init__(self):
        self.__conn = redis.Redis(host='10.211.55.4')
        self.chan_sub = 'fm104.5'
        self.chan_pub = 'fm104.5'

    def public(self, msg):
        self.__conn.publish(self.chan_pub, msg)
        return True

    def subscribe(self):
        pub = self.__conn.pubsub()
        pub.subscribe(self.chan_sub)
        pub.parse_response()
        return pub
Subscriber:
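A minimal subscriber sketch built on the RedisHelper class above; parse_response() blocks until the next message on the channel arrives.

obj = RedisHelper()
redis_sub = obj.subscribe()

while True:
    msg = redis_sub.parse_response()   # e.g. ['message', 'fm104.5', '...']
    print(msg)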
Publisher:
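A minimal publisher sketch built on the same RedisHelper class:

obj = RedisHelper()
obj.public('hello')   # pushes 'hello' onto channel fm104.5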
See also: https://github.com/andymccurdy/redis-py/
http://doc.redisfans.com/
When should you use a relational database, and when NoSQL?
Go for legacy relational databases (RDBMS) when:
The data is well structured and lends itself to a tabular arrangement (rows and columns) in a relational database. Typical examples: bank account info, customer order info, customer info, employee info, department info, etc.
Another aspect of the above point is a schema-oriented data model. When you design a data model (tables, relationships, etc.) for an RDBMS, you need to come up with a well-defined schema: so many tables, each with a known set of columns that store data in known types (CHAR, NUMBER, BLOB, etc.).
Very important: consider whether the data is transactional in nature, i.e. whether it will be stored, accessed and updated in the context of transactions providing ACID semantics, or whether it is acceptable to compromise some or all of these properties.
Correctness is also important and any compromise is _unacceptable_. This stems from the fact that in most NoSQL databases, consistency is traded off in favor of performance and scalability (points on NoSQL databases are elaborated below).
There is no strong or compelling need for a scale-out architecture, i.e. a database that scales out linearly (horizontally) across multiple nodes in a cluster.
The use case is not for “high speed data ingestion”.
If client applications expect to stream large amounts of data into or out of the database quickly, a relational database may not be a good choice, since relational databases are not really designed to scale write-heavy workloads.
To provide the ACID properties, a lot of additional background work is done, especially on the write paths (INSERT, UPDATE, DELETE), and this definitely affects performance.
The use case is not for “storing enormous amounts of data in the range of petabytes”.
Go for NoSQL databases when:
There is no fixed (and predetermined) schema that the data fits into.
Scalability, Performance (high throughput and low operation latency), Continuous Availability are very important requirements to be met by the underlying architecture of database.
A good choice for high-speed data ingestion: applications (for example IoT-style workloads) that generate millions of data points per second and need a database capable of extreme write scalability.
The inherent ability to scale horizontally makes it possible to store large amounts of data across commodity servers in a cluster. Such databases usually run on low-cost hardware and can add compute and storage capacity linearly as demand grows.
Source: https://www.quora.com/When-should-you-use-NoSQL-vs-regular-RDBMS
Bonus: a Redis performance test
Setup:
No usable 1000M network switch was at hand, so the two laptops were connected directly with a straight-through cable to form a 1000M Ethernet link. (Modern NICs auto-negotiate crossover vs. straight-through, so the cable type does not affect speed; the setup ran for a while without any problems.)
Server: T420, i5-2520M (2.5 GHz) / 8 GB RAM, Ubuntu 11.10
Client: Acer, i5-2430M (2.4 GHz) / 4 GB RAM, Mint 11
Redis version: 2.6.9
Benchmark command: ./redis-benchmark -h xx -p xx -t set -q -r 1000 -l -d 20
Value size   Requests/sec   Bandwidth (MByte/s, send+recv)   CPU      CPU detail
20 Byte      170k           24M + 12M                        98.00%   Cpu0: 21.0%us, 40.7%sy, 0.0%ni,  4.3%id, 0.0%wa, 0.0%hi, 34.0%si, 0.0%st
100 Byte     170k           37M + 12M                        97.00%   Cpu0: 20.3%us, 37.9%sy, 0.0%ni,  7.0%id, 0.0%wa, 0.0%hi, 34.9%si, 0.0%st
512 Byte     120k           76M + 9M                         87.00%   Cpu0: 20.9%us, 33.2%sy, 0.0%ni, 25.6%id, 0.0%wa, 0.0%hi, 20.3%si, 0.0%st
1K           90k            94M + 8M                         81.00%   Cpu0: 19.9%us, 30.2%sy, 0.0%ni, 34.2%id, 0.0%wa, 0.0%hi, 15.6%si, 0.0%st
2K           50k            105M + 6M                        77.00%   Cpu0: 18.0%us, 32.0%sy, 0.0%ni, 34.7%id, 0.0%wa, 0.0%hi, 15.3%si, 0.0%st
5K           22k            119M + 3.2M                      77.00%   Cpu0: 22.5%us, 32.8%sy, 0.0%ni, 32.8%id, 0.0%wa, 0.0%hi, 11.9%si, 0.0%st
10K          11k            119M + 1.7M                      70.00%   Cpu0: 18.2%us, 29.8%sy, 0.0%ni, 42.7%id, 0.0%wa, 0.0%hi,  9.3%si, 0.0%st
20K          5.7k           120M + 1M                        58.00%   Cpu0: 17.8%us, 26.4%sy, 0.0%ni, 46.2%id, 0.0%wa, 0.0%hi,  9.6%si, 0.0%st
With values of 1K and above, the 1000M NIC is easily saturated while redis-server never even fills a single CPU core. This shows how efficient Redis is: the server does not need a high-end machine, and the bottleneck is the network card.
Overall the user-space share (us) stays around 20%, so very little CPU is spent in user-level code.