Overview
A great deal of the programming work in data analysis and modeling is spent on data preparation: loading, cleaning, transforming, and reshaping. Sometimes the data stored in files or a database is not in the right form for a data-processing application.
pandas, together with the Python standard library, provides a set of high-level, flexible, and efficient core functions and algorithms that make it straightforward to wrangle data into the right form.
Data normalization
Subtracting the row mean from every row of a pandas DataFrame
Use DataFrame's sub method and specify that the subtraction should happen row-wise (axis=0) as opposed to the default column-wise behaviour:
df.sub(df.mean(axis=1), axis=0)
which is roughly equivalent to the explicit loop:
norm_ui_array = df.values
row_means = np.nanmean(norm_ui_array, axis=1)  # per-row means, ignoring NaN
for i in range(len(norm_ui_array)):
    # `!= np.NaN` is always True, so it masked nothing; use isnan so NaN entries stay untouched
    norm_ui_array[i][~np.isnan(norm_ui_array[i])] -= row_means[i]
[Pandas: Subtract row mean from each element in row]
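A minimal self-contained sketch (the ratings frame and its column names are made up for illustration):
import numpy as np
import pandas as pd

# Hypothetical user-item ratings with missing entries.
ratings = pd.DataFrame({'m1': [5.0, 3.0, np.nan],
                        'm2': [3.0, np.nan, 4.0],
                        'm3': [4.0, 1.0, 2.0]})

# mean(axis=1) skips NaN, so each row is centred on its observed mean.
demeaned = ratings.sub(ratings.mean(axis=1), axis=0)
print(demeaned.mean(axis=1))   # approximately 0 for every row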
Grouping
By “group by” we are referring to a process involving one or more of the following steps:
- Splitting the data into groups based on some criteria
- Applying a function to each group independently
- Combining the results into a data structure
Note: grouping produces a GroupBy object; taking an attribute of it or aggregating it gives back a DataFrame, and selecting a single column of that gives a Series.
Indexes after grouping
After grouping, the index is still the original index; it is only after grouping and then aggregating that the index changes (to the group keys), as the sketch below shows.
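A quick sketch of this (the tiny frame here is made up):
import pandas as pd

df_small = pd.DataFrame({'A': ['foo', 'bar', 'foo'], 'C': [1, 2, 3]})
g = df_small.groupby('A')
print(type(g))                    # a GroupBy object
print(g.head(2).index.tolist())   # non-aggregating ops keep the original row index: [0, 1, 2]
agg = g.sum()
print(type(agg))                  # aggregation returns a DataFrame...
print(agg.index.tolist())         # ...indexed by the group keys: ['bar', 'foo']
print(type(agg['C']))             # and selecting one column gives a Series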
A grouping example
In [86]: df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
   ....:                    'B' : ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
   ....:                    'C' : np.random.randn(8),
   ....:                    'D' : np.random.randn(8)})
In [87]: df
Out[87]:
     A      B         C         D
0  foo    one -1.202872 -0.055224
1  bar    one -1.814470  2.395985
2  foo    two  1.018601  1.552825
3  bar  three -0.595447  0.166599
4  foo    two  1.395433  0.047609
5  bar    two -0.392670 -0.136473
6  foo    one  0.007207 -0.561757
7  foo  three  1.928123 -1.623033
Grouping and then applying a function sum to the resulting groups.
In [88]: df.groupby('A').sum()
Out[88]:
            C        D
A
bar -2.802588  2.42611
foo  3.146492 -0.63958
Grouping by multiple columns forms a hierarchical index, to which we then apply the function:
In [89]: df.groupby(['A','B']).sum()
Out[89]:
                  C         D
A   B
bar one   -1.814470  2.395985
    three -0.595447  0.166599
    two   -0.392670 -0.136473
foo one   -1.195665 -0.616981
    three  1.928123 -1.623033
    two    2.414034  1.600434
Nested indexes
Each sid corresponds to a set of times, and each time to a set of uids; count the number of uids per (sid, time):
shop_ids = user_pay_df['sid'].unique()
s_group = user_pay_df.groupby(by=['sid', 'time']).count()
print(s_group)
for sid in shop_ids:
    print(s_group.loc[sid])
When indexing, you can select on the outermost level first (via .loc), and then perform operations on each sid separately.
                uid
sid time
90  2015-07-06    3
    2015-07-07   10
    2015-07-08    7
    2015-07-09    4

             uid
time
2015-07-02     4
2015-07-03     1
2015-07-04     3
Nested indexing can also be achieved this way:
group_process = lambda g: g.groupby(by=['time']).count()
s_group = user_pay_df.groupby(by=['sid']).apply(group_process)
print(s_group.loc[90])
Oddly, there is an extra column: apply hands each group to group_process with the sid column still present, so count() counts it too.
            uid  sid
time
2015-07-06    3    3
2015-07-07   10   10
2015-07-08    7    7
2015-07-09    4    4
2015-07-10   23   23
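The duplicate column goes away if you select the column of interest before counting. A self-contained sketch (the data is made up to match the columns above):
import pandas as pd

user_pay_df = pd.DataFrame({'uid': [1, 2, 2, 3, 4],
                            'sid': [90, 90, 90, 91, 91],
                            'time': ['2015-07-06', '2015-07-06', '2015-07-07',
                                     '2015-07-06', '2015-07-07']})

# Count only the uid column; the result is a Series with a (sid, time) MultiIndex.
s_group = user_pay_df.groupby(['sid', 'time'])['uid'].count()
print(s_group.loc[90])   # per-time counts for one shop, no extra column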
[Nested groupby in DataFrame]
[Need for speed: Slow nested groupbys and applys in Pandas]
GroupBy object attributes
Note: most of these “attributes” are actually methods.
The groups attribute is a dict whose keys are the computed unique groups and whose corresponding values are the axis labels belonging to each group.
gb.<TAB>
gb.agg        gb.aggregate  gb.apply      gb.boxplot    gb.count      gb.cummax
gb.cummin     gb.cumprod    gb.cumsum     gb.describe   gb.dtype      gb.fillna
gb.filter     gb.first      gb.gender     gb.get_group  gb.groups     gb.head
gb.height     gb.hist       gb.indices    gb.last       gb.max        gb.mean
gb.median     gb.min        gb.name       gb.ngroups    gb.nth        gb.ohlc
gb.plot       gb.prod       gb.quantile   gb.rank       gb.resample   gb.size
gb.std        gb.sum        gb.tail       gb.transform  gb.var        gb.weight
(gender, height, and weight appear because tab completion also lists the columns of the frame being grouped.)
In [19]: df.groupby('A').groups
Out[19]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [21]: gb = df.groupby(['A', 'B'])
In [22]: gb.groups
Out[22]:
{('bar', 'one'): [1],
('bar', 'three'): [3], ...
('foo', 'two'): [2, 4]}
In [23]: len(gb)
Out[23]: 6
[GroupBy object attributes]
Grouping without a reduce
If you only want to group, without reducing, you can instead rebuild the index with set_index:
ut_index = lines.set_index(['user', 'location_id'])
   user        check-in_time  location_id
0     0  2010-10-19 23:55:27        22847
1     0  2010-10-18 22:17:43       420315
2     0  2010-10-17 23:42:03       316637
3     0  2010-10-17 19:26:05        16516
4     0  2010-10-16 18:50:42      5535878

                        check-in_time
user location_id
0    22847        2010-10-19 23:55:27
     420315       2010-10-18 22:17:43
     316637       2010-10-17 23:42:03
     16516        2010-10-17 19:26:05
[参考列转换为行索引set_index部分]
Iterating through groups
In the case of grouping by multiple keys, the group name will be a tuple:
for name, group in df.groupby(['A', 'B']): pass
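For example, with the df from In [86] above:
for name, group in df.groupby('A'):
    print(name)           # 'bar', then 'foo'
    print(group.shape)    # the sub-DataFrame holding that group's rows

for (k1, k2), group in df.groupby(['A', 'B']):
    print((k1, k2))       # e.g. ('bar', 'one')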
[Iterating through groups]
Selecting a group
A single group can be selected using GroupBy.get_group():
In [44]: grouped.get_group('bar')
Out[44]:
     A      B         C         D
1  bar    one -0.042379 -0.089329
3  bar  three -0.009920 -0.945867
5  bar    two  0.495767  1.956030
Or for an object grouped on multiple columns:
In [45]: df.groupby(['A', 'B']).get_group(('bar', 'one'))
Out[45]:
     A    B         C         D
1  bar  one -0.042379 -0.089329
[Grouping section]
[pandas 数据分组运算]
Aggregation
After a grouped aggregation, the result of the aggregation will have the group names as the new index along the grouped axis. In the case of multiple keys, the result is a MultiIndex by default, though this can be changed by using the as_index option.
To change this default, use df.groupby('A', as_index=False).sum() or df.groupby(['A', 'B']).sum().reset_index(), as sketched below.
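A short sketch of the difference, using the df from In [86] (column selection added so only the numeric columns are summed):
flat1 = df.groupby('A', as_index=False)[['C', 'D']].sum()
flat2 = df.groupby(['A', 'B'])[['C', 'D']].sum().reset_index()
print(flat1.columns.tolist())   # ['A', 'C', 'D'] -- the group key stays an ordinary column
print(flat2.columns.tolist())   # ['A', 'B', 'C', 'D'] -- the MultiIndex is flattened back to columns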
Once the GroupBy object has been created, several methods are available to perform a computation on the grouped data.
An obvious one is aggregation via the aggregate or, equivalently, the agg method: …
Another simple aggregation example is to compute the size of each group. This is included in GroupBy as the size method. It returns a Series whose index are the group names and whose values are the sizes of each group.
In [54]: grouped.size()
Out[54]:
A    B
bar  one      1
     three    1
     two      1
foo  one      2
     three    1
     two      2
dtype: int64
In [55]: grouped.describe()
Out[55]:
                C         D
0 count  1.000000  1.000000
  mean  -0.042379 -0.089329
  std         NaN       NaN
  min   -0.042379 -0.089329
  25%   -0.042379 -0.089329
  50%   -0.042379 -0.089329
  75%   -0.042379 -0.089329
...           ...       ...
5 mean   0.768928 -0.297134
  std    0.677005  0.898022
  min    0.290213 -0.932132
  25%    0.529570 -0.614633
  50%    0.768928 -0.297134
  75%    1.008285  0.020364
  max    1.247642  0.337863

[48 rows x 2 columns]
Applying different functions to DataFrame columns
Custom aggregation functions can be applied by passing them to GroupBy.aggregate() or GroupBy.agg().
By passing a dict to aggregate you can apply a different aggregation to the columns of a DataFrame:
In [60]: grouped.agg({'C' : np.sum,
   ....:              'D' : lambda x: np.std(x, ddof=1)})
Out[60]:
            C         D
A
bar  0.443469  1.490982
foo  2.529056  0.645875
[Applying different functions to DataFrame columns]
Transformation
Transform the data within each group rather than globally, e.g. subtracting the group mean from every value in that group.
Given a daily time series ts, whose tail looks like this:
2002-09-30    0.660294
2002-10-01    0.631095
2002-10-02    0.673601
2002-10-03    0.709213
2002-10-04    0.719369
Freq: D, dtype: float64
In [71]: key = lambda x: x.year
In [72]: zscore = lambda x: (x - x.mean()) / x.std()
In [73]: transformed = ts.groupby(key).transform(zscore)
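A self-contained version of the above (the series here is just random daily data, not the ts shown):
import numpy as np
import pandas as pd

ts = pd.Series(np.random.randn(1000),
               index=pd.date_range('2000-01-01', periods=1000, freq='D'))

key = lambda x: x.year                         # group label computed from each index value
zscore = lambda x: (x - x.mean()) / x.std()
transformed = ts.groupby(key).transform(zscore)

# Each year now has (approximately) zero mean and unit standard deviation.
print(transformed.groupby(key).mean().round(12))
print(transformed.groupby(key).std().round(12))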
[Transformation]
Filtration
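filter drops whole groups that fail a predicate, rather than transforming them. A sketch on the df from In [86] (the threshold is arbitrary):
# Keep only the groups of A whose summed C is positive; rows of the dropped
# groups disappear, and the surviving rows keep their original index.
filtered = df.groupby('A').filter(lambda g: g['C'].sum() > 0)
print(filtered)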
[Filtration]
[Group By: split-apply-combine
- Splitting an object into groups
- GroupBy sorting
- GroupBy object attributes
- GroupBy with MultiIndex
- DataFrame column selection in GroupBy
- Iterating through groups
- Selecting a group
- Aggregation
- Applying multiple functions at once
- Applying different functions to DataFrame columns
- Cython-optimized aggregation functions
- Transformation
- Filtration
- Dispatching to instance methods
- Flexible apply
- Other useful features
]
Examples
1 Aggregate each group's values into a list
df[['userID', 'VenueId']].groupby(['userID']).aggregate(lambda x: list(x))
                                                  VenueId
userID
1       [4d8933fc6daeb60c31b04ae0, 4b5b5cfdf964a520e9f...
2       [4c783542748cb713954a3668, 49bd4e25f964a520675...
3       [4c30ffe0ed37a5937fba6903]
4       [4cd047813e63721ea62d9dcc, 4511eb8cf964a5209a3...
Note:
1 If each group has multiple columns, every column within the group is merged into its own list.
2 One subtlety: print(type(tmp.loc[4])) gives <class 'pandas.core.series.Series'>, while print(type(tmp.loc[4][0])) gives <class 'list'>. So after agg it is best to end up with a Series object rather than a DataFrame.
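A self-contained sketch of that note (made-up check-in data):
import pandas as pd

checkins = pd.DataFrame({'userID': [4, 4, 7], 'VenueId': ['v1', 'v2', 'v3']})
tmp = checkins.groupby('userID').aggregate(lambda x: list(x))
print(type(tmp.loc[4]))              # <class 'pandas.core.series.Series'>: a row of the result
print(type(tmp.loc[4]['VenueId']))   # <class 'list'>: the aggregated cell itself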
2 Find the indexes where a VenueId has more than CommonCnt distinct userIDs
venueid_users_cnt = df.groupby('VenueId').count()['userID']
venueid_users_cnt = venueid_users_cnt[venueid_users_cnt > 1].index
venueid_users_cnt = df[['userID', 'VenueId']].groupby('VenueId').agg(lambda x: len(set(x)))['userID']
venueid_users_cnt = venueid_users_cnt.index[venueid_users_cnt > CommonCnt]
Note: agg returns a DataFrame here; select the 'userID' column to turn it into a Series.
Filter out the rows that fall below CommonCnt:
df = df[df['VenueId'].isin(venueid_users_cnt)]
Combining datasets
Data contained in pandas objects can be combined in a number of built-in ways:
pandas.merge connects rows in different DataFrames based on one or more keys.
pandas.concat stacks together multiple objects along an axis.
The combine_first instance method splices together overlapping data, filling in missing values in one object with values from another (see the sketch below).
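combine_first is not demonstrated later in this post, so here is a minimal sketch (made-up values):
import numpy as np
import pandas as pd

a = pd.Series([1.0, np.nan, 3.0], index=['x', 'y', 'z'])
b = pd.Series([10.0, 20.0, 30.0], index=['x', 'y', 'z'])
print(a.combine_first(b))   # x 1.0, y 20.0, z 3.0 -- only a's NaN is filled from b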
Database-style DataFrame merges
This is the equivalent of a database join: merge (join) operations combine datasets by linking rows using one or more keys. These operations are central to relational databases. The merge function in pandas is the main entry point for applying these algorithms to your data.
Arguments of the merge function
Argument     Description
left         DataFrame to be merged on the left side
right        DataFrame to be merged on the right side
how          One of 'inner', 'outer', 'left', 'right'; 'inner' by default
on           Column names to join on. Must be found in both DataFrame objects. If not specified and no other join keys are given, the intersection of the column names in left and right will be used as the join keys
left_on      Columns in the left DataFrame to use as join keys
right_on     Analogous to left_on, for the right DataFrame
left_index   Use the row index in the left DataFrame as its join key
right_index  Analogous to left_index
sort         Sort the merged data by the join keys; True by default. Disabling this can give better performance on large datasets
suffixes     Tuple of string values to append to overlapping column names; defaults to ('_x', '_y')
copy         If False, avoid copying data into the resulting data structure in some exceptional cases. By default always copies
Many-to-one merges
>>>import pandas as pd
>>> df1 = pd.DataFrame({'key': ['b', 'b', 'a','c','a','a','b'],'data1': range(7)})
>>>df2 = pd.DataFrame({'key': ['a', 'b', 'd'],'data2': range(3)})
df1
Out[5]:
data1 key
0 0 b
1 1 b
2 2 a
3 3 c
4 4 a
5 5 a
6 6 b
df2
Out[6]:
data2 key
0 0 a
1 1 b
2 2 d
This is a many-to-one merge: the data in df1 has multiple rows labeled a and b, whereas each value in df2's key column corresponds to only one row. Calling merge on these objects yields:
pd.merge(df1, df2)
Out[7]:
data1 key data2
0 0 b 1
1 1 b 1
2 6 b 1
3 2 a 0
4 4 a 0
5 5 a 0
Note that we did not specify which column to join on. If not specified, merge uses the overlapping column names as the keys. It is good practice, though, to specify it explicitly:
pd.merge(df1, df2, on='key')
Different key column names in the two objects
If the key column names differ between the two objects, you can specify them separately:
>>>df3 = pd.DataFrame({'lkey': ['b', 'b', 'a','c','a','a','b'],
'data1': range(7)})
>>>df4 = pd.DataFrame({'rkey': ['a', 'b','d'],
'data2': range(3)})
>>>pd.merge(df3, df4, left_on='lkey', right_on='rkey')
Notice that c and d, and the data associated with them, are missing from the result. By default merge does an 'inner' join: the keys in the result are the intersection. Other possible options are 'left', 'right', and 'outer'. The outer join takes the union of the keys, combining the effect of the left and right joins:
pd.merge(df1, df2, how='outer')
Out[8]:
data1 key data2
0 0.0 b 1.0
1 1.0 b 1.0
2 6.0 b 1.0
3 2.0 a 0.0
4 4.0 a 0.0
5 5.0 a 0.0
6 3.0 c NaN
7 NaN d 2.0
Many-to-many merges
Many-to-many merges are also simple and require no extra work.
>>> df1 = pd.DataFrame({'key': ['b','b','a','c','a','b'],'data1': range(6)})
>>>df2 = pd.DataFrame({'key': ['a','b','a','b','d'],'data2': range(5)})
>>> pd.merge(df1, df2, on='key', how='left')
data1 key data2
0 0 b 1.0
1 0 b 3.0
2 1 b 1.0
3 1 b 3.0
4 2 a 0.0
5 2 a 2.0
6 3 c NaN
7 4 a 0.0
8 4 a 2.0
9 5 b 1.0
10 5 b 3.0
Many-to-many joins form the Cartesian product of the matching rows. Since there were 3 'b' rows in the left DataFrame and 2 in the right one, there are 6 'b' rows in the result.
The join method only affects the distinct key values appearing in the result:
>>> pd.merge(df1, df2, how='inner')
Merging on multiple keys
To merge with multiple keys, pass a list of column names:
>>>left= pd.DataFrame({'key1': ['foo','foo','bar'],'key2': ['one', 'two', 'one'],'lval': [1,2,3]})
>>>right = pd.DataFrame({'key1': ['foo', 'foo', 'bar', 'bar'], 'key2': ['one', 'one', 'one', 'two'],'lval': [4, 5, 6, 7]})
>>>pd.merge(left, right, on=['key1', 'key2'], how='outer')
key1 key2 lval_x lval_y
0 foo one 1.0 4.0
1 foo one 1.0 5.0
2 foo two 2.0 NaN
3 bar one 3.0 6.0
4 bar two NaN 7.0
Which key combinations appear in the result depends on the chosen merge method. One way to think about it is that the multiple keys form a series of tuples to be used as a single join key (even though that is not how it is actually implemented).
Note: when you are joining columns on columns, the indexes of the DataFrame objects are discarded.
Handling overlapping column names
A last issue to consider in merge operations is the treatment of overlapping column names. While you can address the overlap manually, merge has a more practical suffixes option for specifying strings to append to the overlapping names in the left and right DataFrame objects:
>>>pd.merge(left, right, on='key1', suffixes=('_left', '_right'))
Note: as the earlier output shows, the default is suffixes=('_x', '_y').
Merging on index
In some cases, the merge key(s) in a DataFrame will be found in its index. In that case, you can pass left_index=True or right_index=True (or both) to indicate that the index should be used as the merge key:
>>>left1 = pd.DataFrame({'key':['a','b','a','a','b','c'],'value': range(6)})
>>>right1 = pd.DataFrame({'group_val': [3.5,7]}, index=['a','b'])
left1
Out[18]:
key value
0 a 0
1 b 1
2 a 2
3 a 3
4 b 4
5 c 5
right1
Out[19]:
group_val
a 3.5
b 7.0
pd.merge(left1, right1, left_on='key',right_index=True)
Out[20]:
key value group_val
0 a 0 3.5
2 a 2 3.5
3 a 3 3.5
1 b 1 7.0
4 b 4 7.0
Merging on a hierarchical index
With hierarchically indexed data:
>>>lefth = pd.DataFrame({'key1': ['Ohio','Ohio','Ohio','Nevada','Nevada'],
'key2': [2000, 2001, 2002, 2001, 2002],
'data': np.arange(5.)})
>>>righth = pd.DataFrame(np.arange(12).reshape((6, 2)),
index=[['Nevada', 'Nevada', 'Ohio', 'Ohio', 'Ohio', 'Ohio'], [2001, 2000, 2000, 2000, 2001, 2002]],
columns=['event1', 'event2'])
lefth
Out[24]:
data key1 key2
0 0.0 Ohio 2000
1 1.0 Ohio 2001
2 2.0 Ohio 2002
3 3.0 Nevada 2001
4 4.0 Nevada 2002
righth
Out[25]:
event1 event2
Nevada 2001 0 1
2000 2 3
Ohio 2000 4 5
2000 6 7
2001 8 9
2002 10 11
In this case, you have to indicate the multiple columns to merge on as a list (note the handling of duplicate index values):
pd.merge(lefth, righth, left_on=['key1','key2'],right_index=True, how= 'outer')
Out[7]:
data key1 key2 event1 event2
0 0.0 Ohio 2000.0 4.0 5.0
0 0.0 Ohio 2000.0 6.0 7.0
1 1.0 Ohio 2001.0 8.0 9.0
2 2.0 Ohio 2002.0 10.0 11.0
3 3.0 Nevada 2001.0 0.0 1.0
4 4.0 Nevada 2002.0 NaN NaN
4 NaN Nevada 2000.0 2.0 3.0
The DataFrame.join instance method
Join: SQL-style merges. See the section on database-style joining.
DataFrame has a join instance method that makes merging by index more convenient. It can also be used to combine many DataFrame objects that have the same or similar indexes, whether or not their columns overlap. (left2 and right2 are not defined in this excerpt; a plausible pair is sketched below.)
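One plausible definition, with made-up values, so that the calls below can actually run:
import pandas as pd

left2 = pd.DataFrame([[1., 2.], [3., 4.], [5., 6.]],
                     index=['a', 'c', 'e'], columns=['Ohio', 'Nevada'])
right2 = pd.DataFrame([[7., 8.], [9., 10.], [11., 12.], [13., 14.]],
                      index=['b', 'c', 'd', 'e'], columns=['Missouri', 'Alabama'])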
left2.join(right2,how='outer')
For legacy reasons (i.e. much earlier versions of pandas), DataFrame's join method performs a left join on the join keys.
It also supports joining the index of the passed DataFrame on one of the columns of the calling DataFrame:
left1.join(right1, on='key')
Lastly, for simple index-on-index merges, you can pass a list of DataFrames to join (the more general concat function, described below, can also do this):
>>>another = pd.DataFrame([[7.,8.], [9.,10.], [11.,12.], [16.,17.]],index=['a', 'c','e','f'], columns=['New York','Oregon'])
>>>left2.join([right2, another])
>>>left2.join([right2, another], how='outer')
In [77]: left = pd.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})
In [78]: right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})
In [79]: left
Out[79]:
   key  lval
0  foo     1
1  foo     2
In [80]: right
Out[80]:
   key  rval
0  foo     4
1  foo     5
In [81]: pd.merge(left, right, on='key')
Out[81]:
   key  lval  rval
0  foo     1     4
1  foo     1     5
2  foo     2     4
3  foo     2     5
Concatenating along an axis with concat
Another kind of data combination operation is referred to interchangeably as concatenation, binding, or stacking.
Concat
pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
See the Merging section
Concatenating pandas objects together with concat():
In [73]: df = pd.DataFrame(np.random.randn(10, 4))
In [74]: df
Out[74]:
          0         1         2         3
0 -0.548702  1.467327 -1.015962 -0.483075
1  1.637550 -1.217659 -0.291519 -1.745505
2 -0.263952  0.991460 -0.919069  0.266046
3 -0.709661  1.669052  1.037882 -1.705775
4 -0.919854 -0.042379  1.247642 -0.009920
5  0.290213  0.495767  0.362949  1.548106
6 -1.131345 -0.089329  0.337863 -0.945867
7 -0.932132  1.956030  0.017587 -0.016692
8 -0.575247  0.254161 -1.143704  0.215897
9  1.193555 -0.077118 -0.408530 -0.862495
# break it into pieces
In [75]: pieces = [df[:3], df[3:7], df[7:]]
In [76]: pd.concat(pieces)
Out[76]:
          0         1         2         3
0 -0.548702  1.467327 -1.015962 -0.483075
1  1.637550 -1.217659 -0.291519 -1.745505
2 -0.263952  0.991460 -0.919069  0.266046
3 -0.709661  1.669052  1.037882 -1.705775
4 -0.919854 -0.042379  1.247642 -0.009920
5  0.290213  0.495767  0.362949  1.548106
6 -1.131345 -0.089329  0.337863 -0.945867
7 -0.932132  1.956030  0.017587 -0.016692
8 -0.575247  0.254161 -1.143704  0.215897
9  1.193555 -0.077118 -0.408530 -0.862495
Arguments of the concat function
objs             List or dict of pandas objects to be concatenated; the only required argument
axis             Axis to concatenate along; defaults to 0
join             Either 'inner' or 'outer'; 'outer' by default. Whether to intersect (inner) or union (outer) indexes along the other axes
join_axes        Specific indexes to use for the other n-1 axes instead of performing union/intersection logic
keys             Values to associate with the objects being concatenated, forming a hierarchical index along the concatenation axis. Can be a list or array of arbitrary values, an array of tuples, or a list of arrays (if multiple-level arrays are passed in levels)
levels           Specific indexes to use as hierarchical index level(s) if keys is passed
names            Names for the created hierarchical levels if keys and/or levels are passed
verify_integrity Check the new axis in the concatenated object for duplicates and raise an exception if any; False (allow duplicates) by default
ignore_index     Do not preserve the indexes along the concatenation axis, instead producing a new range(total_length) index
NumPy's concatenate function
NumPy has a concatenate function for combining raw NumPy arrays:
>>>arr = np.arange(12).reshape((3, 4))
>>> arr
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> np.concatenate([arr, arr], axis=1)
array([[ 0, 1, 2, 3, 0, 1, 2, 3],
[ 4, 5, 6, 7, 4, 5, 6, 7],
[ 8, 9, 10, 11, 8, 9, 10, 11]])
Concatenation of pandas objects with concat
For pandas objects (such as Series and DataFrame), labeled axes enable you to further generalize array concatenation.
Suppose we have three Series with no index overlap:
s1 = pd.Series([0, 1], index=['a','b'])
s2 = pd.Series([2, 3, 4], index=['c','d','e'])
s3 = pd.Series([5, 6], index=['f', 'g'])
Calling concat with these objects glues together the values and indexes:
>>> pd.concat([s1,s2,s3])
a 0
b 1
c 2
d 3
e 4
f 5
g 6
dtype: int64
By default concat works along axis=0, producing another Series. If you pass axis=1, the result will instead be a DataFrame (axis=1 is the columns):
pd.concat([s1, s2, s3], axis=1)
Out[18]:
0 1 2
a 0.0 NaN NaN
b 1.0 NaN NaN
c NaN 2.0 NaN
d NaN 3.0 NaN
e NaN 4.0 NaN
f NaN NaN 5.0
g NaN NaN 6.0
In this case there is no overlap on the other axis, which, as you can see, is the sorted union (the 'outer' join) of the indexes. Pass join='inner' to get the intersection instead:
s4 = pd.concat([s1 * 5, s3])
s4
Out[20]:
a 0
b 5
f 5
g 6
dtype: int64
pd.concat([s1,s4], axis=1)
Out[21]:
0 1
a 0.0 0
b 1.0 5
f NaN 5
g NaN 6
pd.concat([s1,s4], axis=1, join='inner')
Out[22]:
0 1
a 0 0
b 1 5
You can specify the indexes to be used on the other axes with join_axes:
pd.concat([s1, s4], axis=1, join_axes=[['a','c','b','e']])
0 1
a 0.0 0.0
c NaN NaN
b 1.0 5.0
e NaN NaN
Creating a hierarchical index on the concatenation axis to distinguish the pieces
Suppose you want to be able to tell the concatenated pieces apart in the result. You can create a hierarchical index on the concatenation axis using the keys argument:
pd.concat([s1,s1,s3])
a 0
b 1
a 0
b 1
f 5
g 6
dtype: int64
pd.concat([s1,s1,s3], keys=['one','two','three'])
one a 0
b 1
two a 0
b 1
three f 5
g 6
dtype: int64
In the case of combining Series along axis=1, the keys become the DataFrame column headers:
pd.concat([s1,s2,s3], axis=1, keys=['one','two','three'])
The same logic extends to DataFrame objects:
>>>df1 = pd.DataFrame(np.arange(6).reshape(3,2),index=['a','b','c'],columns=['one','two'])
>>>df2 = pd.DataFrame(5 + np.arange(4).reshape(2,2), index=['a','c'],columns=['three','four'])
In [29]: df1
Out[29]:
one two
a 0 1
b 2 3
c 4 5
In [30]: df2
Out[30]:
three four
a 5 6
c 7 8
In [31]: pd.concat([df1,df2],axis=1,keys=['level1','level2'])
Out[31]:
level1 level2
one two three four
a 0 1 5.0 6.0
b 2 3 NaN NaN
c 4 5 7.0 8.0
If you pass a dict of objects instead of a list, the dict's keys will be used for the keys option:
>>>pd.concat({'level1':df1,'level2':df2}, axis=1)
There are two additional arguments governing how the hierarchical index is created:
>>> pd.concat([df1, df2], axis=1, keys=['level1','level2'],names=['upper','lower'])
DataFrames in which the row index is not meaningful:
>>>df1 = pd.DataFrame(np.random.randn(3,4),columns=['a','b','c','d'])
>>>df2 = pd.DataFrame(np.random.randn(2,3), columns=['b','d','a'])
In [36]: df1
Out[36]:
a b c d
0 0.208624 -1.061705 -0.517955 -1.131341
1 -0.554871 -1.162995 -2.177741 -0.148387
2 -2.521862 -1.307174 -0.505220 0.560942
In [37]: df2
Out[37]:
b d a
0 0.508929 0.002077 -1.851976
1 2.745318 0.087151 0.951520
In this case, just pass ignore_index=True:
In [38]: pd.concat([df1, df2],ignore_index=True)
Out[38]:
a b c d
0 0.208624 -1.061705 -0.517955 -1.131341
1 -0.554871 -1.162995 -2.177741 -0.148387
2 -2.521862 -1.307174 -0.505220 0.560942
3 -1.851976 0.508929 NaN 0.002077
4 0.951520 2.745318 NaN 0.087151
Append
Append rows to a DataFrame. See the Appending section.
In [82]: df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
In [83]: df
Out[83]:
          A         B         C         D
0  1.346061  1.511763  1.627081 -0.990582
1 -0.441652  1.211526  0.268520  0.024580
2 -1.577585  0.396823 -0.105381 -0.532532
3  1.453749  1.208843 -0.080952 -0.264610
4 -0.727965 -0.589346  0.339969 -0.693205
5 -0.339355  0.593616  0.884345  1.591431
6  0.141809  0.220390  0.435589  0.192451
7 -0.096701  0.803351  1.715071 -0.708758
In [84]: s = df.iloc[3]
In [85]: df.append(s, ignore_index=True)
Out[85]:
          A         B         C         D
0  1.346061  1.511763  1.627081 -0.990582
1 -0.441652  1.211526  0.268520  0.024580
2 -1.577585  0.396823 -0.105381 -0.532532
3  1.453749  1.208843 -0.080952 -0.264610
4 -0.727965 -0.589346  0.339969 -0.693205
5 -0.339355  0.593616  0.884345  1.591431
6  0.141809  0.220390  0.435589  0.192451
7 -0.096701  0.803351  1.715071 -0.708758
8  1.453749  1.208843 -0.080952 -0.264610
[Merge, join, and concatenate]
Reshaping
See the sections on Hierarchical Indexing and Reshaping.
Stack
In [90]: tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
   ....:                      'foo', 'foo', 'qux', 'qux'],
   ....:                     ['one', 'two', 'one', 'two',
   ....:                      'one', 'two', 'one', 'two']]))
In [91]: index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
In [92]: df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])
In [93]: df2 = df[:4]
In [94]: df2
Out[94]:
                     A         B
first second
bar   one     0.029399 -0.542108
      two     0.282696 -0.087302
baz   one    -1.575170  1.771208
      two     0.816482  1.100230
The stack() method “compresses” a level in the DataFrame's columns.
In [95]: stacked = df2.stack()
In [96]: stacked
Out[96]:
first  second
bar    one     A    0.029399
               B   -0.542108
       two     A    0.282696
               B   -0.087302
baz    one     A   -1.575170
               B    1.771208
       two     A    0.816482
               B    1.100230
dtype: float64
With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is unstack(), which by default unstacks the last level:
In [97]: stacked.unstack()
Out[97]:
                     A         B
first second
bar   one     0.029399 -0.542108
      two     0.282696 -0.087302
baz   one    -1.575170  1.771208
      two     0.816482  1.100230
In [98]: stacked.unstack(1)
Out[98]:
second        one       two
first
bar   A  0.029399  0.282696
      B -0.542108 -0.087302
baz   A -1.575170  0.816482
      B  1.771208  1.100230
In [99]: stacked.unstack(0)
Out[99]:
first          bar       baz
second
one    A  0.029399 -1.575170
       B -0.542108  1.771208
two    A  0.282696  0.816482
       B -0.087302  1.100230
Pivot Tables
See the section on Pivot Tables.
In [100]: df = pd.DataFrame({'A' : ['one', 'one', 'two', 'three'] * 3,
   .....:                    'B' : ['A', 'B', 'C'] * 4,
   .....:                    'C' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
   .....:                    'D' : np.random.randn(12),
   .....:                    'E' : np.random.randn(12)})
In [101]: df
Out[101]:
        A  B    C         D         E
0     one  A  foo  1.418757 -0.179666
1     one  B  foo -1.879024  1.291836
2     two  C  foo  0.536826 -0.009614
3   three  A  bar  1.006160  0.392149
4     one  B  bar -0.029716  0.264599
5     one  C  bar -1.146178 -0.057409
6     two  A  foo  0.100900 -1.425638
7   three  B  foo -1.035018  1.024098
8     one  C  foo  0.314665 -0.106062
9     one  A  bar -0.773723  1.824375
10    two  B  bar -1.170653  0.595974
11  three  C  bar  0.648740  1.167115
We can produce pivot tables from this data very easily:
In [102]: pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])
Out[102]:
C             bar       foo
A     B
one   A -0.773723  1.418757
      B -0.029716 -1.879024
      C -1.146178  0.314665
three A  1.006160       NaN
      B       NaN -1.035018
      C  0.648740       NaN
two   A       NaN  0.100900
      B -1.170653       NaN
      C       NaN  0.536826
[Reshaping and Pivot Tables]
from: http://blog.csdn.net/pipisorry/article/details/39506169
ref: [API Reference]
[pandas 数据规整]