Data Preparation: Variable Selection in Practice (2)

The code above first discretizes the continuous variables in the dataset with equal-frequency binning, then computes the positive- and negative-sample counts within each bin, along with each bin's share of the overall totals, which feeds the predictive-power metrics computed below. Invocation:

```python
binStatisticResDf = binStatistic(dataset, continuousColList, discreteColList, targetCol, 10)
```
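The `binStatistic` helper is defined in the previous installment of this series and is not shown here. As a rough guide, a minimal sketch of what it could look like follows; the output column names (`colName`, `binAllCnt`, `allCnt`, `binPosRto`, `posRto`) are taken from how the later metric functions use them, while the `pd.qcut`-based binning and the exact row layout are assumptions:

```python
import pandas as pd

def binStatistic(df, cont, disc, tag, bins):
    # Hypothetical sketch: equal-frequency bin statistics for continuous columns.
    # Only the columns consumed by gain()/gini()/lift() below are produced.
    rows = []
    allCnt = len(df)                      # total sample count
    posRto = df[tag].sum() / allCnt       # overall positive-sample ratio
    for col in cont:
        # equal-frequency binning; drop duplicate edges for skewed columns
        binned = pd.qcut(df[col], q=bins, duplicates='drop')
        for interval, sub in df.groupby(binned, observed=True)[tag]:
            rows.append({
                'colName': col,
                'binName': str(interval),
                'binAllCnt': len(sub),              # samples in this bin
                'binPosCnt': int(sub.sum()),        # positives in this bin
                'binPosRto': sub.mean() if len(sub) else 0.0,  # bin positive ratio
                'allCnt': allCnt,
                'posRto': posRto,
            })
    return pd.DataFrame(rows)
```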

The result is as follows:


(1) Information gain

```python
## information gain calculation
import math

def entropyVal(prob):
    if prob == 0 or prob == 1:
        entropy = 0
    else:
        entropy = -(prob * math.log(prob, 2) + (1 - prob) * math.log((1 - prob), 2))
    return entropy

def gain(df, cont, disc, tag, bins):
    binDf = binStatistic(df, cont, disc, tag, bins)
    binDf['binAllRto'] = binDf['binAllCnt'] / binDf['allCnt']                      # proportion of samples in each bin
    binDf['binEnty'] = binDf['binAllRto'] * binDf['binPosRto'].apply(entropyVal)   # weighted entropy of each bin
    binDf['allEnty'] = binDf['posRto'].apply(entropyVal)                           # overall entropy
    # information gain = overall entropy - weighted sum of bin entropies
    tmpSer = binDf['allEnty'].groupby(binDf['colName']).mean() - binDf['binEnty'].groupby(binDf['colName']).sum()
    tmpSer.name = 'gain'
    resSer = tmpSer.sort_values(ascending=False)   # sort by information gain, descending
    return resSer

gainSer = gain(dataset, continuousColList, discreteColList, targetCol, 10)
```
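As a quick sanity check of the formula, the information gain of a single binned variable can be worked out by hand. This toy example (made-up numbers, not from the dataset) splits 10 samples, 5 of them positive, into two bins:

```python
import math

def entropyVal(prob):
    # binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p)
    if prob == 0 or prob == 1:
        return 0
    return -(prob * math.log(prob, 2) + (1 - prob) * math.log(1 - prob, 2))

# overall: 5/10 positive -> H(0.5) = 1 bit
# bin A: 4 samples, all positive (pure); bin B: 6 samples, 1 positive
overall = entropyVal(5 / 10)
weighted = 4 / 10 * entropyVal(4 / 4) + 6 / 10 * entropyVal(1 / 6)
info_gain = overall - weighted   # about 0.61 bits
```

A pure bin contributes zero entropy, so the more cleanly the bins separate the classes, the larger the gain, which is exactly what the ranking below exploits.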

The result is as follows:

```python
gainSer
Out[11]:
colName
worst perimeter            0.684679
worst radius               0.660968
worst area                 0.660008
worst concave points       0.638077
mean concave points        0.624494
mean perimeter             0.560532
mean area                  0.556175
mean radius                0.549344
mean concavity             0.522874
area error                 0.511074
worst concavity            0.463328
radius error               0.365562
perimeter error            0.355755
worst compactness          0.321361
mean compactness           0.312030
concavity error            0.217127
concave points error       0.198744
mean texture               0.187825
worst texture              0.182411
worst smoothness           0.152196
worst symmetry             0.147642
compactness error          0.135060
mean smoothness            0.115767
mean symmetry              0.098459
worst fractal dimension    0.098175
mean fractal dimension     0.042941
fractal dimension error    0.042878
texture error              0.017587
smoothness error           0.016914
symmetry error             0.016343
Name: gain, dtype: float64
```

(2) Gini index

```python
## Gini index calculation
def giniVal(prob):
    gini = 1 - pow(prob, 2) - pow(1 - prob, 2)
    return gini

def gini(df, cont, disc, tag, bins):
    binDf = binStatistic(df, cont, disc, tag, bins)
    binDf['binAllRto'] = binDf['binAllCnt'] / binDf['allCnt']                    # proportion of samples in each bin
    binDf['binGini'] = binDf['binAllRto'] * binDf['binPosRto'].apply(giniVal)    # weighted Gini impurity of each bin
    binDf['allGini'] = binDf['posRto'].apply(giniVal)                            # overall Gini impurity
    # Gini gain = overall impurity - weighted sum of bin impurities
    tmpSer = binDf['allGini'].groupby(binDf['colName']).mean() - binDf['binGini'].groupby(binDf['colName']).sum()
    tmpSer.name = 'gini'
    resSer = tmpSer.sort_values(ascending=False)   # sort by Gini gain, descending
    return resSer

giniSer = gini(dataset, continuousColList, discreteColList, targetCol, 10)
```
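The Gini-based ranking follows the same overall-minus-weighted-bins pattern, just with Gini impurity in place of entropy. Reusing the same made-up two-bin split as the entropy example:

```python
def giniVal(prob):
    # binary Gini impurity: 1 - p^2 - (1-p)^2
    return 1 - prob ** 2 - (1 - prob) ** 2

# toy data: 5/10 positive overall -> Gini(0.5) = 0.5
# bin A: 4/4 positive (pure, impurity 0); bin B: 1/6 positive
overall = giniVal(5 / 10)
weighted = 4 / 10 * giniVal(1.0) + 6 / 10 * giniVal(1 / 6)
gini_gain = overall - weighted   # 1/3
```

Gini impurity peaks at 0.5 (for p = 0.5) rather than 1 like entropy, so the absolute values are smaller, but the resulting variable ranking is usually very similar, as the two output listings confirm.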

The result is as follows:

```python
giniSer
Out[12]:
colName
worst perimeter            0.354895
worst radius               0.342576
worst area                 0.341825
worst concave points       0.335207
mean concave points        0.329700
mean area                  0.301404
mean perimeter             0.300452
mean radius                0.297503
mean concavity             0.289159
area error                 0.276011
worst concavity            0.256282
radius error               0.207248
perimeter error            0.197484
worst compactness          0.184931
mean compactness           0.182082
concavity error            0.118407
concave points error       0.114959
mean texture               0.110607
worst texture              0.106525
worst smoothness           0.092690
worst symmetry             0.091039
compactness error          0.078588
mean smoothness            0.067543
worst fractal dimension    0.061877
mean symmetry              0.059676
mean fractal dimension     0.027288
fractal dimension error    0.026794
texture error              0.010803
smoothness error           0.010417
symmetry error             0.010157
Name: gini, dtype: float64
```

(3) Discrimination (lift)

```python
## discrimination (lift) calculation
def lift(df, cont, disc, tag, bins):
    binDf = binStatistic(df, cont, disc, tag, bins)
    binDf['binLift'] = binDf['binPosRto'] / binDf['posRto']    # bin lift = bin positive rate / overall positive rate
    tmpSer = binDf['binLift'].groupby(binDf['colName']).max()  # discrimination = max(bin lift)
    tmpSer.name = 'lift'
    resSer = tmpSer.sort_values(ascending=False)               # sort by discrimination, descending
    return resSer

liftSer = lift(dataset, continuousColList, discreteColList, targetCol, 10)
```
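Unlike gain and Gini, the lift metric keeps only the best single bin per variable. A tiny illustration with made-up bin rates shows the calculation:

```python
# toy example: overall positive rate 0.4;
# positive rates of four equal-frequency bins of one variable
posRto = 0.4
binPosRto = [0.1, 0.3, 0.8, 0.4]

# bin lift = bin positive rate / overall positive rate
lift_per_bin = [p / posRto for p in binPosRto]

# discrimination = max over bins of the lift
discrimination = max(lift_per_bin)   # bin with rate 0.8 -> lift 2.0
```

A lift of 2.0 means the variable's best bin concentrates positives at twice the base rate; a variable whose bins all sit near lift 1.0 carries little discriminating power.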

