Machine Learning in Action Study Notes: Spam Filtering with Naive Bayes

Probability underlies many machine learning algorithms. Building decision trees earlier already used a small piece of it: count how many times a feature takes a particular value in the dataset, divide by the total number of instances, and you have the probability of the feature taking that value.
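The counting estimate just described fits in a couple of lines. A toy sketch (the 'yes'/'no' feature column here is hypothetical data, not from the book):

```python
# Frequency estimate of a probability: occurrences of a value
# divided by the total number of instances in the dataset.
dataset = ['yes', 'no', 'yes', 'yes', 'no']   # hypothetical feature column
p_yes = dataset.count('yes') / float(len(dataset))
print(p_yes)  # 3 of 5 instances -> 0.6
```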

An earlier exercise implemented a simple naive Bayes classifier and used it to classify text correctly. This section applies Bayes to a practical scenario: spam filtering.

Example: filtering spam with naive Bayes

The previous section worked with simple text files and extracted lists of strings from them. In this example we look at one of the best-known applications of naive Bayes: email spam filtering. First, the general framework for attacking the problem:

1. Building the Bayes classifier

This was implemented in the previous section. Both naive Bayes applications below rely on the classifier training function from before; the complete code is:

```python
# -*- coding: utf-8 -*-
"""
Created on Tue Sep 08 16:12:55 2015
@author: Administrator
"""
from numpy import *

def loadDataSet():
    postingList = [['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
                   ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                   ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                   ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                   ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                   ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
    listClass = [0, 1, 0, 1, 0, 1]  # 1 = contains abusive words, 0 = does not
    return postingList, listClass

def createNonRepeatedList(data):
    vocList = set([])
    for doc in data:
        vocList = vocList | set(doc)  # union of the two sets
    return list(vocList)

def detectInput(vocList, inputStream):
    returnVec = [0] * len(vocList)  # all-zero list as long as vocList
    for word in inputStream:        # process the given words
        if word in vocList:
            returnVec[vocList.index(word)] = 1
        else:
            print("The word: %s is not in the vocabulary!" % word)
    return returnVec

def trainNaiveBayes(trainMatrix, classLabel):
    numTrainDocs = len(trainMatrix)
    numWords = len(trainMatrix[0])
    pBase = sum(classLabel) / float(numTrainDocs)
    # the following settings avoid probabilities of 0 (Laplace smoothing)
    p0Num = ones(numWords)
    p1Num = ones(numWords)
    p0Denom = 2.0
    p1Denom = 2.0
    for i in range(numTrainDocs):
        if classLabel[i] == 1:
            p1Num += trainMatrix[i]
            p1Denom += sum(trainMatrix[i])
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    p0 = log(p0Num / p0Denom)
    p1 = log(p1Num / p1Denom)
    return p0, p1, pBase

def naiveBayesClassify(vec2Classify, p0, p1, pBase):
    p0res = sum(vec2Classify * p0) + log(1.0 - pBase)
    p1res = sum(vec2Classify * p1) + log(pBase)
    if p1res > p0res:
        return 1
    else:
        return 0

def testingNaiveBayes():
    loadData, classLabel = loadDataSet()
    vocList = createNonRepeatedList(loadData)
    trainMat = []
    for doc in loadData:
        trainMat.append(detectInput(vocList, doc))
    p0, p1, pBase = trainNaiveBayes(array(trainMat), array(classLabel))
    testInput = ['love', 'my', 'dalmation']
    thisDoc = array(detectInput(vocList, testInput))
    print(testInput, 'classified as: ', naiveBayesClassify(thisDoc, p0, p1, pBase))
    testInput = ['stupid', 'garbage']
    thisDoc = array(detectInput(vocList, testInput))
    print(testInput, 'classified as: ', naiveBayesClassify(thisDoc, p0, p1, pBase))
```
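To make the smoothing used in the training step concrete, here is a tiny worked example. It is self-contained, and the four-word vocabulary and counts are made up for illustration:

```python
from numpy import array, log, ones

# Worked example of the smoothed estimate: word counts start at 1 and the
# denominator at 2, so a word never seen in a class still gets a small
# nonzero probability instead of zero.
vocab = ['stupid', 'garbage', 'love', 'dalmation']   # toy 4-word vocabulary
spam_word_counts = array([2.0, 1.0, 0.0, 0.0])       # word totals in class-1 docs
total_spam_words = spam_word_counts.sum()            # 3 words seen in class 1

p1 = log((ones(4) + spam_word_counts) / (2.0 + total_spam_words))
# without smoothing, 'love' would get probability 0 and log(0) = -inf,
# wiping out the whole score regardless of the other words
print(p1)
```

Working in log space turns the product of per-word probabilities into a sum, which avoids numerical underflow when many small probabilities are multiplied.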

2. Preparing the data: tokenizing text

First, write a Python function textSplit() that parses each email file and breaks the message into individual words. The mail is divided into two kinds: normal email stored under email/ham/ and spam under email/spam/. The code below reads in the text, splits it into tokens, converts every token to lowercase, and returns only the strings longer than two characters. The tokenizing step uses a few tricks, including regular expressions and lowercase conversion with .lower().

```python
def textSplit(bigString):
    # parse the text with a regular expression
    import re
    listOfTokens = re.split(r'\W+', bigString)
    return [tok.lower() for tok in listOfTokens if len(tok) > 2]
```
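A quick sanity check of this splitting logic, as a self-contained sketch (the sample sentence is arbitrary):

```python
import re

def text_split(big_string):
    # split on runs of non-word characters, lowercase, drop tokens of length <= 2
    tokens = re.split(r'\W+', big_string)
    return [tok.lower() for tok in tokens if len(tok) > 2]

sample = 'This book is the best book on Python or M.L. I have ever laid eyes upon.'
print(text_split(sample))
```

Note that short fragments such as 'is', 'on', 'M', 'L', and 'I' are discarded by the length filter, which removes most of the noise produced by splitting on punctuation.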

3. Testing the algorithm: hold-out cross-validation with naive Bayes
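The procedure named by this heading is to hold out part of the corpus at random as a test set, train on the rest, and measure the error rate. The sketch below walks through it end to end on hypothetical in-memory messages standing in for the files under email/ham/ and email/spam/; all names here (text_split, doc2vec, train_nb, classify_nb) and the sixteen messages are illustrative stand-ins written so the sketch runs on its own:

```python
import random
import re
from numpy import array, log, ones

# hypothetical stand-ins for the email/spam/ and email/ham/ files
spam_texts = [
    'buy cheap pills now limited offer', 'cheap offer win money now click here',
    'win big money with this cheap offer', 'click now for the best cheap pills',
    'limited offer win money click today', 'best pills cheap money back offer',
    'win money now with one click', 'cheap cheap cheap pills offer now',
]
ham_texts = [
    'are you coming to the meeting tomorrow', 'please review the attached project report',
    'the meeting is moved to next monday', 'lunch tomorrow with the project team',
    'can you send the report before friday', 'notes from the team meeting attached',
    'project schedule review on monday please', 'see you at lunch after the meeting',
]

def text_split(big_string):
    return [tok.lower() for tok in re.split(r'\W+', big_string) if len(tok) > 2]

doc_list = [text_split(t) for t in spam_texts + ham_texts]
class_list = [1] * len(spam_texts) + [0] * len(ham_texts)
vocab = sorted(set(w for doc in doc_list for w in doc))

def doc2vec(doc):
    # set-of-words vector over the vocabulary
    vec = [0] * len(vocab)
    for w in doc:
        if w in vocab:
            vec[vocab.index(w)] = 1
    return vec

def train_nb(train_mat, labels):
    num_docs, num_words = len(train_mat), len(train_mat[0])
    p_spam = sum(labels) / float(num_docs)
    p0_num, p1_num = ones(num_words), ones(num_words)  # Laplace smoothing
    p0_den, p1_den = 2.0, 2.0
    for i in range(num_docs):
        if labels[i] == 1:
            p1_num += array(train_mat[i]); p1_den += sum(train_mat[i])
        else:
            p0_num += array(train_mat[i]); p0_den += sum(train_mat[i])
    return log(p0_num / p0_den), log(p1_num / p1_den), p_spam

def classify_nb(vec, p0, p1, p_spam):
    p1_score = sum(vec * p1) + log(p_spam)
    p0_score = sum(vec * p0) + log(1.0 - p_spam)
    return 1 if p1_score > p0_score else 0

# hold-out cross-validation: pick 4 of the 16 documents at random as a test set
random.seed(1)
test_idx = random.sample(range(len(doc_list)), 4)
train_idx = [i for i in range(len(doc_list)) if i not in test_idx]

train_mat = [doc2vec(doc_list[i]) for i in train_idx]
train_labels = [class_list[i] for i in train_idx]
p0, p1, p_spam = train_nb(train_mat, train_labels)

errors = 0
for i in test_idx:
    if classify_nb(array(doc2vec(doc_list[i])), p0, p1, p_spam) != class_list[i]:
        errors += 1
error_rate = errors / float(len(test_idx))
print('error rate:', error_rate)
```

Because the held-out documents are chosen at random, repeating the split many times and averaging the error rates gives a more stable estimate than any single run.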


