Chinese word segmentation with jieba in Python

Using jieba for Chinese word segmentation

```python
import jieba

sentences = ["我喜欢吃土豆",
             "土豆是个百搭的东西",
             "我不喜欢今天雾霾的北京",
             "customer service"]
# jieba.suggest_freq('雾霾', True)
# jieba.suggest_freq('百搭', True)
words = [list(jieba.cut(doc)) for doc in sentences]
print(words)
```

Output:

```
[['我', '喜欢', '吃', '土豆'],
 ['土豆', '是', '个', '百搭', '的', '东西'],
 ['我', '不', '喜欢', '今天', '雾霾', '的', '北京'],
 ['customer', ' ', 'service']]
```

Source: https://github.com/fxsjy/jieba


