Text classification with scikit-learn

I couldn't find a unified benchmark in the text-mining papers, so I had to run the experiments myself. If any readers know of a commonly used benchmark for 20newsgroups or another good public dataset (ideally with classification results for every class, using all or just a subset of the features), please leave a comment. Many thanks!

OK, on to the main content. The 20newsgroups site offers three versions of the dataset; here we use the original one, 20news-19997.tar.gz.

The process breaks down into the following steps:

1. Load the dataset
2. Extract features
3. Classification
4. Clustering

Note: there is a reference example on the scipy (scikit-learn) site, but it looks a bit messy and has bugs, so in this post we go through it block by block.

Environment: Python 2.7 + SciPy (scikit-learn)

1. Loading the dataset

Download the dataset 20news-19997.tar.gz, extract it into the scikit_learn_data folder, and load the data; see the code comments for details.

[python]

# first extract the 20 news_group dataset to /scikit_learn_data
from sklearn.datasets import fetch_20newsgroups

# all categories
# newsgroup_train = fetch_20newsgroups(subset='train')

# part categories
categories = ['comp.graphics',
              'comp.os.ms-windows.misc',
              'comp.sys.ibm.pc.hardware',
              'comp.sys.mac.hardware',
              'comp.windows.x']
newsgroup_train = fetch_20newsgroups(subset='train', categories=categories)
# the test split is needed by the feature-extraction code below
newsgroups_test = fetch_20newsgroups(subset='test', categories=categories)

You can check whether it loaded correctly:

[python]

# print category names
from pprint import pprint
pprint(list(newsgroup_train.target_names))

Result:

['comp.graphics', 'comp.os.ms-windows.misc', 'comp.sys.ibm.pc.hardware', 'comp.sys.mac.hardware', 'comp.windows.x']
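You can also count how many documents were loaded. A small sketch (with these five categories the counts should match the 2936 train / 1955 test documents seen in the output further below):

[python]

# Sketch: number of documents in each split
# (assumes newsgroups_test was loaded as in the block above)
print 'train docs: %d' % len(newsgroup_train.data)
print 'test docs: %d' % len(newsgroups_test.data)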

2. Feature extraction

The newsgroup_train we just loaded is simply a collection of documents. We need to extract features from it (term frequencies and the like) using fit_transform.

Method 1. HashingVectorizer, specifying the number of features

[python]

# newsgroup_train.data is the original documents, but we need to extract the
# feature vectors in order to model the text data
from sklearn.feature_extraction.text import HashingVectorizer

vectorizer = HashingVectorizer(stop_words='english', non_negative=True,
                               n_features=10000)
fea_train = vectorizer.fit_transform(newsgroup_train.data)
fea_test = vectorizer.fit_transform(newsgroups_test.data)

# returned feature matrices have shape [n_samples, n_features]
print 'Size of fea_train:' + repr(fea_train.shape)
print 'Size of fea_test:' + repr(fea_test.shape)
# 11314 documents, 130107 vectors for all categories
print 'The average feature sparsity is {0:.3f}%'.format(
    fea_train.nnz / float(fea_train.shape[0] * fea_train.shape[1]) * 100)

Result:

Size of fea_train:(2936, 10000)
Size of fea_test:(1955, 10000)
The average feature sparsity is 1.002%

Since we only kept 10,000 words, i.e. a 10,000-dimensional feature space, the sparsity is not that low yet. If you instead count the actual vocabulary with TfidfVectorizer you get tens of thousands of feature dimensions; over the full corpus I counted more than 130,000, which gives a really sparse matrix.
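To see where that 130,000-plus figure comes from, you can let a TfidfVectorizer build the full vocabulary instead of hashing into a fixed number of buckets. A minimal sketch (the exact vocabulary size depends on your scikit-learn version and on which categories you load):

[python]

# Sketch: vocabulary size and sparsity without feature hashing
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_v = TfidfVectorizer(stop_words='english')
tfidf_fea = tfidf_v.fit_transform(newsgroup_train.data)
print 'vocabulary size: %d' % len(tfidf_v.vocabulary_)
print 'sparsity: {0:.3f}%'.format(
    tfidf_fea.nnz / float(tfidf_fea.shape[0] * tfidf_fea.shape[1]) * 100)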

**************************************************************************************************************************

As the code comments above hint, TF-IDF features extracted separately on the train and test sets end up with different dimensions. So how do we make them the same? There are two ways:

Method 2. CountVectorizer + TfidfTransformer

Have the two CountVectorizers share a vocabulary:

[python]

# ----------------------------------------------------
# Method 2: CountVectorizer + TfidfTransformer
print '*************************\nCountVectorizer+TfidfTransformer\n*************************'
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

count_v1 = CountVectorizer(stop_words='english', max_df=0.5)
counts_train = count_v1.fit_transform(newsgroup_train.data)
print "the shape of train is " + repr(counts_train.shape)

# reuse the vocabulary learned on the training set so that train and test
# end up in the same feature space
count_v2 = CountVectorizer(vocabulary=count_v1.vocabulary_)
counts_test = count_v2.fit_transform(newsgroups_test.data)
print "the shape of test is " + repr(counts_test.shape)

tfidftransformer = TfidfTransformer()
tfidf_train = tfidftransformer.fit(counts_train).transform(counts_train)
tfidf_test = tfidftransformer.fit(counts_test).transform(counts_test)

Result:

*************************
CountVectorizer+TfidfTransformer
*************************
the shape of train is (2936, 66433)
the shape of test is (1955, 66433)
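Because count_v2 reuses count_v1's vocabulary, column i refers to the same word in both matrices. A quick sanity check (a sketch, not from the original post):

[python]

# Sketch: the shared vocabulary gives train and test the same feature space
assert counts_train.shape[1] == counts_test.shape[1]
print 'train and test share %d features' % counts_train.shape[1]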

Method 3. TfidfVectorizer

Have the two TfidfVectorizers share a vocabulary:
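The code for this method is cut off in this copy of the post. A minimal sketch of the idea: fit a TfidfVectorizer on the training set, then reuse its learned vocabulary_ for the test set (the variable names and parameter choices here are my own):

[python]

# Method 3 (sketch): two TfidfVectorizers sharing one vocabulary
from sklearn.feature_extraction.text import TfidfVectorizer

tv = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english')
tfidf_train_2 = tv.fit_transform(newsgroup_train.data)
# build the test-set vectorizer from the vocabulary learned on the training set
tv2 = TfidfVectorizer(vocabulary=tv.vocabulary_)
tfidf_test_2 = tv2.fit_transform(newsgroups_test.data)
print "the shape of train is " + repr(tfidf_train_2.shape)
print "the shape of test is " + repr(tfidf_test_2.shape)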
