sklearn — Loading and Using the Built-in Datasets
Published: 2019-06-11


After being busy for a long while, I finally remembered that this series is still unfinished.

  So today let's cover a small part of the sklearn library: loading the built-in datasets, clustering, the silhouette coefficient, and so on.
 

Built-in dataset API

Dataset function           Description           Task                       Size
load_boston                Boston house prices   regression                 506×13
fetch_california_housing   California housing    regression                 20640×9
load_diabetes              diabetes              regression                 442×10
load_digits                handwritten digits    classification             1797×64
load_breast_cancer         breast cancer         classification/clustering  (357+212)×30
load_iris                  iris flowers          classification/clustering  (50×3)×4
load_wine                  wine                  classification             (59+71+48)×13
load_linnerud              physical exercise     multi-output regression    20×3

(Note: load_linnerud is a multi-output regression dataset — 20 samples, 3 exercise features, 3 physiological targets — not multiclass classification. Also, load_boston has been removed from scikit-learn 1.2 onward.)
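As a quick sanity check of the table above (not part of the original post): each loader returns a Bunch object whose `.data` attribute is an array of shape (n_samples, n_features). Since load_boston is gone from recent scikit-learn releases, the sketch below sticks to loaders that still ship with the library.

```python
# Confirm the sizes listed in the table for a few of the loaders.
from sklearn.datasets import load_iris, load_wine, load_digits

for loader in (load_iris, load_wine, load_digits):
    ds = loader()
    print(loader.__name__, ds.data.shape)
# load_iris (150, 4), load_wine (178, 13), load_digits (1797, 64)
```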

 

Keys for extracting information:

  • DESCR: a description of the dataset
  • data: the feature data
  • feature_names: the names of the feature columns
  • target: the labels
  • target_names: the names of the label classes (regression datasets do not have this)
     

Extracting the data

  Take load_iris as an example.

# The import is required
from sklearn.datasets import load_iris

iris = load_iris()
iris  # everything in iris: the data, the labels, the field names, and so on

  That output is long and messy, and it appears again below anyway, so I won't copy it here.

 

iris.keys()  # the keys of the dataset

dict_keys(['data', 'target', 'target_names', 'DESCR', 'feature_names'])

 

descr = iris['DESCR']
data = iris['data']
feature_names = iris['feature_names']
target = iris['target']
target_names = iris['target_names']

descr

'Iris Plants Database\n====================\n\nNotes\n-----\nData Set Characteristics:\n :Number of Instances: 150 (50 in each of three classes)\n :Number of Attributes: 4 numeric, predictive attributes and the class\n :Attribute Information:\n - sepal length in cm\n - sepal width in cm\n - petal length in cm\n - petal width in cm\n - class:\n - Iris-Setosa\n - Iris-Versicolour\n - Iris-Virginica\n :Summary Statistics:\n\n ============== ==== ==== ======= ===== ====================\n Min Max Mean SD Class Correlation\n ============== ==== ==== ======= ===== ====================\n sepal length: 4.3 7.9 5.84 0.83 0.7826\n sepal width: 2.0 4.4 3.05 0.43 -0.4194\n petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)\n petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)\n ============== ==== ==== ======= ===== ====================\n\n :Missing Attribute Values: None\n :Class Distribution: 33.3% for each of 3 classes.\n :Creator: R.A. Fisher\n :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)\n :Date: July, 1988\n\nThis is a copy of UCI ML iris datasets.\nhttp://archive.ics.uci.edu/ml/datasets/Iris\n\nThe famous Iris database, first used by Sir R.A Fisher\n\nThis is perhaps the best known database to be found in the\npattern recognition literature. Fisher's paper is a classic in the field and\nis referenced frequently to this day. (See Duda & Hart, for example.) The\ndata set contains 3 classes of 50 instances each, where each class refers to a\ntype of iris plant. One class is linearly separable from the other 2; the\nlatter are NOT linearly separable from each other.\n\nReferences\n----------\n - Fisher,R.A. "The use of multiple measurements in taxonomic problems"\n Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to\n Mathematical Statistics" (John Wiley, NY, 1950).\n - Duda,R.O., & Hart,P.E. (1973) Pattern Classification and Scene Analysis.\n (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.\n - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System\n Structure and Classification Rule for Recognition in Partially Exposed\n Environments". IEEE Transactions on Pattern Analysis and Machine\n Intelligence, Vol. PAMI-2, No. 1, 67-71.\n - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE Transactions\n on Information Theory, May 1972, 431-433.\n - See also: 1988 MLC Proceedings, 54-64. Cheeseman et al"s AUTOCLASS II\n conceptual clustering system finds 3 classes in the data.\n - Many, many more ...\n'

 

data

array([[5.1, 3.5, 1.4, 0.2],

[4.9, 3. , 1.4, 0.2],
[4.7, 3.2, 1.3, 0.2],
[4.6, 3.1, 1.5, 0.2],
[5. , 3.6, 1.4, 0.2],
[5.4, 3.9, 1.7, 0.4],
[4.6, 3.4, 1.4, 0.3],
[5. , 3.4, 1.5, 0.2],
[4.4, 2.9, 1.4, 0.2],
[4.9, 3.1, 1.5, 0.1],
[5.4, 3.7, 1.5, 0.2],
[4.8, 3.4, 1.6, 0.2],
[4.8, 3. , 1.4, 0.1],
[4.3, 3. , 1.1, 0.1],
[5.8, 4. , 1.2, 0.2],
[5.7, 4.4, 1.5, 0.4],
[5.4, 3.9, 1.3, 0.4],
[5.1, 3.5, 1.4, 0.3],
[5.7, 3.8, 1.7, 0.3],
[5.1, 3.8, 1.5, 0.3],
[5.4, 3.4, 1.7, 0.2],
[5.1, 3.7, 1.5, 0.4],
[4.6, 3.6, 1. , 0.2],
[5.1, 3.3, 1.7, 0.5],
[4.8, 3.4, 1.9, 0.2],
[5. , 3. , 1.6, 0.2],
[5. , 3.4, 1.6, 0.4],
[5.2, 3.5, 1.5, 0.2],
[5.2, 3.4, 1.4, 0.2],
[4.7, 3.2, 1.6, 0.2],
[4.8, 3.1, 1.6, 0.2],
[5.4, 3.4, 1.5, 0.4],
[5.2, 4.1, 1.5, 0.1],
[5.5, 4.2, 1.4, 0.2],
[4.9, 3.1, 1.5, 0.1],
[5. , 3.2, 1.2, 0.2],
[5.5, 3.5, 1.3, 0.2],
[4.9, 3.1, 1.5, 0.1],
[4.4, 3. , 1.3, 0.2],
[5.1, 3.4, 1.5, 0.2],
[5. , 3.5, 1.3, 0.3],
[4.5, 2.3, 1.3, 0.3],
[4.4, 3.2, 1.3, 0.2],
[5. , 3.5, 1.6, 0.6],
[5.1, 3.8, 1.9, 0.4],
[4.8, 3. , 1.4, 0.3],
[5.1, 3.8, 1.6, 0.2],
[4.6, 3.2, 1.4, 0.2],
[5.3, 3.7, 1.5, 0.2],
[5. , 3.3, 1.4, 0.2],
[7. , 3.2, 4.7, 1.4],
[6.4, 3.2, 4.5, 1.5],
[6.9, 3.1, 4.9, 1.5],
[5.5, 2.3, 4. , 1.3],
[6.5, 2.8, 4.6, 1.5],
[5.7, 2.8, 4.5, 1.3],
[6.3, 3.3, 4.7, 1.6],
[4.9, 2.4, 3.3, 1. ],
[6.6, 2.9, 4.6, 1.3],
[5.2, 2.7, 3.9, 1.4],
[5. , 2. , 3.5, 1. ],
[5.9, 3. , 4.2, 1.5],
[6. , 2.2, 4. , 1. ],
[6.1, 2.9, 4.7, 1.4],
[5.6, 2.9, 3.6, 1.3],
[6.7, 3.1, 4.4, 1.4],
[5.6, 3. , 4.5, 1.5],
[5.8, 2.7, 4.1, 1. ],
[6.2, 2.2, 4.5, 1.5],
[5.6, 2.5, 3.9, 1.1],
[5.9, 3.2, 4.8, 1.8],
[6.1, 2.8, 4. , 1.3],
[6.3, 2.5, 4.9, 1.5],
[6.1, 2.8, 4.7, 1.2],
[6.4, 2.9, 4.3, 1.3],
[6.6, 3. , 4.4, 1.4],
[6.8, 2.8, 4.8, 1.4],
[6.7, 3. , 5. , 1.7],
[6. , 2.9, 4.5, 1.5],
[5.7, 2.6, 3.5, 1. ],
[5.5, 2.4, 3.8, 1.1],
[5.5, 2.4, 3.7, 1. ],
[5.8, 2.7, 3.9, 1.2],
[6. , 2.7, 5.1, 1.6],
[5.4, 3. , 4.5, 1.5],
[6. , 3.4, 4.5, 1.6],
[6.7, 3.1, 4.7, 1.5],
[6.3, 2.3, 4.4, 1.3],
[5.6, 3. , 4.1, 1.3],
[5.5, 2.5, 4. , 1.3],
[5.5, 2.6, 4.4, 1.2],
[6.1, 3. , 4.6, 1.4],
[5.8, 2.6, 4. , 1.2],
[5. , 2.3, 3.3, 1. ],
[5.6, 2.7, 4.2, 1.3],
[5.7, 3. , 4.2, 1.2],
[5.7, 2.9, 4.2, 1.3],
[6.2, 2.9, 4.3, 1.3],
[5.1, 2.5, 3. , 1.1],
[5.7, 2.8, 4.1, 1.3],
[6.3, 3.3, 6. , 2.5],
[5.8, 2.7, 5.1, 1.9],
[7.1, 3. , 5.9, 2.1],
[6.3, 2.9, 5.6, 1.8],
[6.5, 3. , 5.8, 2.2],
[7.6, 3. , 6.6, 2.1],
[4.9, 2.5, 4.5, 1.7],
[7.3, 2.9, 6.3, 1.8],
[6.7, 2.5, 5.8, 1.8],
[7.2, 3.6, 6.1, 2.5],
[6.5, 3.2, 5.1, 2. ],
[6.4, 2.7, 5.3, 1.9],
[6.8, 3. , 5.5, 2.1],
[5.7, 2.5, 5. , 2. ],
[5.8, 2.8, 5.1, 2.4],
[6.4, 3.2, 5.3, 2.3],
[6.5, 3. , 5.5, 1.8],
[7.7, 3.8, 6.7, 2.2],
[7.7, 2.6, 6.9, 2.3],
[6. , 2.2, 5. , 1.5],
[6.9, 3.2, 5.7, 2.3],
[5.6, 2.8, 4.9, 2. ],
[7.7, 2.8, 6.7, 2. ],
[6.3, 2.7, 4.9, 1.8],
[6.7, 3.3, 5.7, 2.1],
[7.2, 3.2, 6. , 1.8],
[6.2, 2.8, 4.8, 1.8],
[6.1, 3. , 4.9, 1.8],
[6.4, 2.8, 5.6, 2.1],
[7.2, 3. , 5.8, 1.6],
[7.4, 2.8, 6.1, 1.9],
[7.9, 3.8, 6.4, 2. ],
[6.4, 2.8, 5.6, 2.2],
[6.3, 2.8, 5.1, 1.5],
[6.1, 2.6, 5.6, 1.4],
[7.7, 3. , 6.1, 2.3],
[6.3, 3.4, 5.6, 2.4],
[6.4, 3.1, 5.5, 1.8],
[6. , 3. , 4.8, 1.8],
[6.9, 3.1, 5.4, 2.1],
[6.7, 3.1, 5.6, 2.4],
[6.9, 3.1, 5.1, 2.3],
[5.8, 2.7, 5.1, 1.9],
[6.8, 3.2, 5.9, 2.3],
[6.7, 3.3, 5.7, 2.5],
[6.7, 3. , 5.2, 2.3],
[6.3, 2.5, 5. , 1.9],
[6.5, 3. , 5.2, 2. ],
[6.2, 3.4, 5.4, 2.3],
[5.9, 3. , 5.1, 1.8]])

 

feature_names

['sepal length (cm)',

'sepal width (cm)',
'petal length (cm)',
'petal width (cm)']

 

target

array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,

0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])

 

target_names

array(['setosa', 'versicolor', 'virginica'], dtype='<U10')

 


A quick try

from sklearn.cluster import KMeans  # clustering
from sklearn.preprocessing import StandardScaler, MinMaxScaler  # preprocessing

# Standardization (z-score)
# Formula: (x - mean(X)) / std(X)
scale = StandardScaler().fit(data)  # learn the scaling parameters
X = scale.transform(data)  # apply them

# Min-max scaling (to [0, 1])
# Formula: (x - min(X)) / (max(X) - min(X))
scale = MinMaxScaler().fit(data)  # learn the scaling parameters
X = scale.transform(data)  # apply them
X

[output image not preserved: the min-max-scaled array]
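As a sketch of what the scaler actually computes (this check is mine, not part of the original post), MinMaxScaler can be verified against the explicit formula above:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.preprocessing import MinMaxScaler

data = load_iris().data
X = MinMaxScaler().fit_transform(data)

# Apply the formula (x - min(X)) / (max(X) - min(X)) column-wise by hand
manual = (data - data.min(axis=0)) / (data.max(axis=0) - data.min(axis=0))
print(np.allclose(X, manual))  # True
```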

clf = KMeans(n_clusters=3, random_state=123).fit(X)  # cluster into 3 groups
clf.labels_

[output image not preserved: clf.labels_]

kmeans = KMeans(n_clusters=3, random_state=123).fit(data)  # compare using the unscaled data
kmeans.labels_

[output image not preserved: kmeans.labels_]

target  # we can also pull out the original labels for comparison

[output image not preserved: the original target labels]
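Comparing kmeans.labels_ with target by eye is awkward, because KMeans assigns cluster IDs arbitrarily (its "0" need not be setosa). One option, not used in the original post, is a permutation-invariant score such as the adjusted Rand index:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import adjusted_rand_score

iris = load_iris()
labels = KMeans(n_clusters=3, random_state=123, n_init=10).fit(iris.data).labels_

# ARI is invariant to relabeling of clusters: 1.0 means a perfect match,
# values near 0 mean no better than a random assignment
print(adjusted_rand_score(iris.target, labels))
```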

  Of course, people realized early on that they needed some way to measure how good a clustering actually is.

  And so the silhouette coefficient was born.
  See the code below.

'''
A few notes on the silhouette coefficient:
1. For the i-th sample, compute its mean distance to all other samples in its own
   cluster; call it ai (this measures cohesion).
2. For the i-th sample and each cluster not containing it, compute the mean distance
   from the sample to all samples in that cluster; take the minimum over clusters and
   call it bi (this measures separation).
3. The silhouette coefficient of sample i is si = (bi - ai) / max(ai, bi).

So, clearly: si lies in [-1, 1], and larger is better. A negative value (ai > bi)
means the sample was assigned to the wrong cluster, which is unacceptable; a value
near 0 (ai ≈ bi) indicates overlapping clusters.
'''
from sklearn.metrics import silhouette_score  # silhouette coefficient
import matplotlib.pyplot as plt

silhouette_scores = []
for i in range(2, 15):
    kmeans = KMeans(n_clusters=i, random_state=123).fit(X)  # build and train the model
    score = silhouette_score(X, kmeans.labels_)  # X is the min-max-scaled data
    silhouette_scores.append(score)

plt.figure(figsize=(10, 6))
plt.plot(range(2, 15), silhouette_scores, linewidth=1.5, linestyle="-")
plt.show()

[output image not preserved: plot of silhouette score vs. number of clusters]
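Instead of only eyeballing the plot, the best k can also be read off the score list directly. A minimal self-contained sketch (the n_init=10 argument is mine, pinned for reproducibility across scikit-learn versions):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import MinMaxScaler

X = MinMaxScaler().fit_transform(load_iris().data)
ks = list(range(2, 15))
scores = [silhouette_score(X, KMeans(n_clusters=k, random_state=123, n_init=10).fit(X).labels_)
          for k in ks]

# The k with the highest silhouette score is the candidate number of clusters
best_k = ks[int(np.argmax(scores))]
print(best_k)
```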

  Well, let's stop here for now; in the next post we'll continue with building regression models.

Reposted from: https://www.cnblogs.com/WoLykos/p/9552873.html
