The local intelligence repository contains more than 3 million alert domains tied to underground (black-market) activity. So far 110,000 of them have been crawled, yielding 47,000 valid domains (domains that are unreachable, flagged as scams by police notices, return errors, or have neither a title nor hyperlinks are judged invalid). The corresponding titles, hyperlink texts, and related fields were saved to a CSV file of about 110 MB, which serves as the initial corpus for building a test model.
After crawling, the raw data contains many illegal characters and meaningless tokens. For some sites both the title and the hyperlinks are images, so the extracted text is garbled; typical examples include '抱歉,站点已暂停' ("Sorry, this site has been suspended"), '���', and '404 Not Found'. These records need to be cleaned.
Some hyperlink texts also contain many \t and \n characters, which must be stripped as additional noise.
After cleaning, 42,629 records remain.
The cleaning process is as follows:
- Step 1: drop rows containing the illegal replacement character �:
import pandas as pd

# on_bad_lines='skip' replaces the error_bad_lines=False flag removed in pandas 2.0
data = pd.read_csv('feature_ori_9.csv', delimiter='|', encoding='utf_8_sig', on_bad_lines='skip')
del_list = []
for index in data.index:
    # drop the row if either text field contains the Unicode replacement character
    if str(data['title'][index]).find('�') != -1 or str(data['features_ori'][index]).find('�') != -1:
        print('delete %s content' % data['url'][index])
        del_list.append(index)
data_new = data.drop(del_list, axis=0)
# keep the header row so the next script can still address columns by name
data_new.to_csv('./feature_ori_9_1.csv', index=False, encoding='utf_8_sig', sep='|')
print("++++++++++++++++++delete � finished+++++++++++++++")
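The \t/\n stripping step mentioned above (which presumably produces feature_ori_9_2.csv from feature_ori_9_1.csv) is not shown. A minimal sketch, assuming the same column layout, might collapse all runs of whitespace into single spaces; the demo below operates on an in-memory frame so it is self-contained, whereas in the pipeline it would read feature_ori_9_1.csv and write feature_ori_9_2.csv:

```python
import re

import pandas as pd

def strip_noise(s):
    """Collapse \\t, \\n, and runs of spaces into single spaces."""
    return re.sub(r'\s+', ' ', str(s)).strip()

# Demo data; the real script would load feature_ori_9_1.csv here.
data = pd.DataFrame({'title': ['首页\t登录'],
                     'features_ori': ['新闻\n\n公告 \t 联系我们']})
for col in ('title', 'features_ori'):
    data[col] = data[col].apply(strip_noise)
print(data['features_ori'][0])  # -> 新闻 公告 联系我们
```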
- Step 2: drop rows whose title contains '404 Not Found' (and any remaining � rows):

import pandas as pd

data = pd.read_csv('feature_ori_9_2.csv', delimiter='|', encoding='utf_8_sig', on_bad_lines='skip')
del_list = []
for index in data.index:
    if str(data['title'][index]).find('404 Not Found') != -1 or str(data['features_ori'][index]).find('�') != -1:
        print('delete %s content' % data['url'][index])
        del_list.append(index)
data_new = data.drop(del_list, axis=0)
data_new.to_csv('./feature_ori_9_3.csv', index=False, encoding='utf_8_sig', sep='|')
print("++++++++++++++++++delete 404 finished+++++++++++++++")
- Step 3: drop rows containing no Chinese at all (English-only pages):

import pandas as pd

def is_chinese(s):
    """
    Check whether the string contains at least one Chinese character.
    :param s: the string to check
    :return: bool
    """
    for ch in s:
        if u'\u4e00' <= ch <= u'\u9fff':
            return True
    return False

data = pd.read_csv('feature_ori_9_3.csv', delimiter='|', encoding='utf_8_sig', on_bad_lines='skip')
del_list = []
for index in data.index:
    if not is_chinese(str(data['title'][index])) and not is_chinese(str(data['features_ori'][index])):
        print('delete %s content' % data['url'][index])
        del_list.append(index)
data_new = data.drop(del_list, axis=0)
data_new.to_csv('./feature_ori_9_4.csv', index=False, encoding='utf_8_sig', sep='|')
print("++++++++++++++++++delete English finished+++++++++++++++")
- Step 4: drop the url column, keeping only title and features_ori:

import pandas as pd

data = pd.read_csv('feature_ori_9_4.csv', delimiter='|', encoding='utf_8_sig', on_bad_lines='skip')
data_new = data.drop(['url'], axis=1)
# the final corpus file is written without a header row
data_new.to_csv('./feature_ori_9_5.csv', index=False, header=False, encoding='utf_8_sig', sep='|')
print("++++++++++++++++++delete url finished+++++++++++++++")
Because Chinese lexical analysis differs from English, we start with Chinese-only analysis, so URLs whose content is entirely English are removed first.
Furthermore, the url field itself carries no useful textual information, so only title and features_ori are kept, further slimming the data.
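For reference, the per-row loops above can be folded into a single pass over boolean masks. This is only a sketch under the assumption of the same column names (title, features_ori, url); it omits the \t/\n normalization step and uses an in-memory demo frame rather than the real CSV files:

```python
import pandas as pd

def is_chinese(s):
    """True if the string contains at least one Chinese character."""
    return any(u'\u4e00' <= ch <= u'\u9fff' for ch in s)

def clean(data):
    """Apply all four filters at once and drop the url column."""
    title = data['title'].astype(str)
    feats = data['features_ori'].astype(str)
    keep = (
        ~title.str.contains('�', regex=False)          # step 1: replacement char
        & ~feats.str.contains('�', regex=False)
        & ~title.str.contains('404 Not Found', regex=False)  # step 2: dead pages
        & (title.apply(is_chinese) | feats.apply(is_chinese))  # step 3: keep Chinese
    )
    return data[keep].drop(columns=['url'])            # step 4: drop url

# Demo; in the pipeline this would read feature_ori_9.csv instead.
df = pd.DataFrame({
    'url': ['a.com', 'b.com', 'c.com'],
    'title': ['彩票首页', '404 Not Found', 'Welcome'],
    'features_ori': ['注册 登录', '�', 'home about'],
})
print(clean(df))  # only the first row survives
```

Vectorized masks avoid building an explicit del_list and tend to be faster on large frames, at the cost of making each filter slightly harder to log individually.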