
Feature selection technique for regression case

Machine learning has become an indispensable tool over the last few years. It is used in many fields, in industry, in research laboratories and in companies alike. Compared with deep learning, which is often considered a black box, machine learning methods reveal a certain physical logic in the interpretation of their results. Indeed, a good prediction with machine learning algorithms requires a number of logical steps that help establish cause-and-effect relationships and allow a better interpretation. Among these steps, one is the key to this interpretability: feature selection. Several techniques exist for finding the features that are most relevant to the phenomenon under study. In this article, we go through a few of them.

By the end of this tutorial, you will be able to:

  • Program feature selection with scikit-learn,
  • Understand why you should never rely on a single technique,
  • Select the best features by combining several methods.

Set Directory

import os
import pandas as pd
import numpy as np
import timeit as tm
os.chdir("D:/Cours_ESI/Evaluation")  # working directory containing the data

from sklearn.linear_model import (LinearRegression, Ridge, Lasso)
from sklearn.feature_selection import RFE, f_regression
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
from xgboost import XGBRegressor
from sklearn.ensemble import RandomForestRegressor

Problem statement

The data for this tutorial come from weather stations in France. The goal is to predict the flu rate in France, by region and by week. In total we have 19 features and 11,484 observations.

Data pre-processing

train=pd.read_csv('_data/train_set.csv',delimiter=',',decimal=',',low_memory=False)
train.drop(["Unnamed: 0"],axis=1,inplace=True)
train=train.astype({"ff":"float64","t":"float64","u":"float64","n":"float64","pression":"float64","precipitation":"float64",
                   "[0-19 ans]":"float64","[20-39 ans]":"float64","[40-59 ans]":"float64","[60-74 ans]":"float64","[75 ans plus]":"float64",
                    "Prop H":"float64","Prop F":"float64"})
train=train.rename(columns={"[0-19 ans]":"0_19ans","[20-39 ans]":"20_39ans","[40-59 ans]":"40_59ans","[60-74 ans]":"60_74ans","[75 ans plus]":"75etplus","Prop H":"Prop_h","Prop F":"Prop_f"})
train=train.loc[:,["week","region_code","ff","t","u","n","pression","precipitation","Year","0_19ans","20_39ans","40_59ans","60_74ans",
        "75etplus","Prop_h","Prop_f","reqgoo1","reqgoo2","reqgoo3","TauxGrippe"]]
train.shape
(11484, 20)
train.isnull().any()
week             False
region_code      False
ff               False
t                False
u                False
n                False
pression         False
precipitation    False
Year             False
0_19ans          False
20_39ans         False
40_59ans         False
60_74ans         False
75etplus         False
Prop_h           False
Prop_f           False
reqgoo1          False
reqgoo2          False
reqgoo3          False
TauxGrippe       False
dtype: bool
sns.pairplot(train)

Define features and target

delete=["TauxGrippe"]
features= train.drop(delete,axis=1)
target=train.TauxGrippe

Define the feature selection methods

  • Predictive power score (PPS)
import ppscore as pps
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")

import matplotlib.pyplot as plt 
matrix_df = pps.matrix(train)[['x', 'y', 'ppscore']].pivot(columns='x', index='y', values='ppscore')
plt.figure(figsize=(16,12))
sns.heatmap(matrix_df, cmap=plt.get_cmap("coolwarm"), annot=True, fmt='.2f')

You can see that with this technique only one feature appears relevant for the model, namely 'week'. However, it is not obvious that this feature alone is enough to predict the flu rate in France well, all the more so because it is hard to establish a cause-and-effect relationship between the week and the flu rate. Still, it puts us on alert: we can say that the flu rate varies from week to week.
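
To go one step further, the snippet below (a minimal sketch added here, not part of the original notebook) reuses the pps.matrix output to list only the features whose PPS towards TauxGrippe exceeds an arbitrary 0.1 cut-off:

# Sketch: PPS of every feature with respect to the target only (0.1 is an arbitrary cut-off)
pps_scores = pps.matrix(train)
pps_target = pps_scores[(pps_scores["y"] == "TauxGrippe") & (pps_scores["x"] != "TauxGrippe")]
print(pps_target.loc[pps_target["ppscore"] > 0.1, ["x", "ppscore"]].sort_values("ppscore", ascending=False))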

  • Correlation coefficient
from matplotlib import pyplot
train.corr(method='kendall').style.format("{:.2}").background_gradient(cmap=pyplot.get_cmap('coolwarm'))
Kendall correlation matrix (2 significant digits); the values in each row follow the same column order as the list of rows:

week:           1.0, 0.0, -0.069, 0.014, -0.027, 0.092, -0.017, 0.024, 0.95, -0.12, -0.32, -0.22, 0.32, 0.2, -0.021, 0.021, -0.11, -0.0019, -0.042, 0.028
region_code:    0.0, 1.0, -0.054, 0.1, -0.22, -0.22, -0.1, -0.037, 0.0, -0.49, -0.28, 0.083, 0.46, 0.39, -0.06, 0.06, 0.0087, -0.14, -0.14, 0.07
ff:             -0.069, -0.054, 1.0, -0.11, 0.042, 0.095, 0.076, 0.14, -0.066, 0.14, 0.0088, -0.18, -0.048, -0.053, -0.11, 0.11, 0.036, 0.067, 0.077, 0.068
t:              0.014, 0.1, -0.11, 1.0, -0.37, -0.27, 0.046, -0.022, -0.0012, -0.058, -0.034, -0.013, 0.065, 0.042, -0.0074, 0.0074, -0.23, -0.25, -0.28, -0.41
u:              -0.027, -0.22, 0.042, -0.37, 1.0, 0.47, 0.033, 0.24, -0.038, 0.13, 0.056, 0.056, -0.14, -0.074, 0.066, -0.066, 0.13, 0.14, 0.16, 0.19
n:              0.092, -0.22, 0.095, -0.27, 0.47, 1.0, -0.16, 0.33, 0.097, 0.061, -0.0056, 0.03, -0.051, -0.032, 0.045, -0.045, 0.047, 0.11, 0.11, 0.11
pression:       -0.017, -0.1, 0.076, 0.046, 0.033, -0.16, 1.0, -0.18, -0.018, 0.22, 0.071, -0.19, -0.096, -0.13, 0.017, -0.017, -0.031, -0.094, -0.097, -0.011
precipitation:  0.024, -0.037, 0.14, -0.022, 0.24, 0.33, -0.18, 1.0, 0.021, 0.017, 0.0017, 0.0079, -0.022, 0.0017, 0.032, -0.032, -0.0071, -0.004, -0.007, -0.016
Year:           0.95, 0.0, -0.066, -0.0012, -0.038, 0.097, -0.018, 0.021, 1.0, -0.13, -0.34, -0.23, 0.34, 0.21, -0.022, 0.022, -0.11, 0.0073, -0.035, 0.044
0_19ans:        -0.12, -0.49, 0.14, -0.058, 0.13, 0.061, 0.22, 0.017, -0.13, 1.0, 0.61, -0.38, -0.74, -0.75, 0.17, -0.17, 0.00094, 0.13, 0.14, -0.036
20_39ans:       -0.32, -0.28, 0.0088, -0.034, 0.056, -0.0056, 0.071, 0.0017, -0.34, 0.61, 1.0, -0.086, -0.82, -0.83, 0.26, -0.26, 0.026, 0.036, 0.057, -0.013
40_59ans:       -0.22, 0.083, -0.18, -0.013, 0.056, 0.03, -0.19, 0.0079, -0.23, -0.38, -0.086, 1.0, 0.088, 0.19, 0.12, -0.12, 0.086, -0.09, -0.086, -0.025
60_74ans:       0.32, 0.46, -0.048, 0.065, -0.14, -0.051, -0.096, -0.022, 0.34, -0.74, -0.82, 0.088, 1.0, 0.79, -0.2, 0.2, -0.02, -0.096, -0.12, 0.038
75etplus:       0.2, 0.39, -0.053, 0.042, -0.074, -0.032, -0.13, 0.0017, 0.21, -0.75, -0.83, 0.19, 0.79, 1.0, -0.28, 0.28, -0.0011, -0.048, -0.064, 0.021
Prop_h:         -0.021, -0.06, -0.11, -0.0074, 0.066, 0.045, 0.017, 0.032, -0.022, 0.17, 0.26, 0.12, -0.2, -0.28, 1.0, -1.0, -0.027, -0.086, -0.094, 0.007
Prop_f:         0.021, 0.06, 0.11, 0.0074, -0.066, -0.045, -0.017, -0.032, 0.022, -0.17, -0.26, -0.12, 0.2, 0.28, -1.0, 1.0, 0.027, 0.086, 0.094, -0.007
reqgoo1:        -0.11, 0.0087, 0.036, -0.23, 0.13, 0.047, -0.031, -0.0071, -0.11, 0.00094, 0.026, 0.086, -0.02, -0.0011, -0.027, 0.027, 1.0, 0.66, 0.57, 0.32
reqgoo2:        -0.0019, -0.14, 0.067, -0.25, 0.14, 0.11, -0.094, -0.004, 0.0073, 0.13, 0.036, -0.09, -0.096, -0.048, -0.086, 0.086, 0.66, 1.0, 0.89, 0.3
reqgoo3:        -0.042, -0.14, 0.077, -0.28, 0.16, 0.11, -0.097, -0.007, -0.035, 0.14, 0.057, -0.086, -0.12, -0.064, -0.094, 0.094, 0.57, 0.89, 1.0, 0.32
TauxGrippe:     0.028, 0.07, 0.068, -0.41, 0.19, 0.11, -0.011, -0.016, 0.044, -0.036, -0.013, -0.025, 0.038, 0.021, 0.007, -0.007, 0.32, 0.3, 0.32, 1.0

Unlike the previous technique, here three new features appear relevant for the model (the ones most correlated with TauxGrippe). They are completely different from the single feature detected by the first method. It therefore becomes quite difficult to make a decision, given that the intersection of the two results is empty.
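
If you want to extract that short list programmatically, the sketch below (an illustration added here, with an arbitrary 0.3 cut-off) keeps the features whose absolute Kendall correlation with TauxGrippe is highest:

# Sketch: features whose |Kendall correlation| with the target exceeds 0.3 (arbitrary cut-off)
corr_with_target = train.corr(method='kendall')["TauxGrippe"].drop("TauxGrippe")
strong = corr_with_target[corr_with_target.abs() > 0.3]
print(strong.sort_values(ascending=False))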

  • Ensemble of ranking methods

The idea here is to rank all the features with several estimators (recursive feature elimination with tree-based and linear models, plus the magnitude of Ridge/Lasso coefficients and XGBoost feature importances), rescale each ranking to [0, 1] and then average them feature by feature.
ranks = {}
def ranking(ranks, names, order=1):
    # rescale the raw scores/ranks to [0, 1]; order=-1 flips the scale so that 1 = most important
    ranks = MinMaxScaler().fit_transform(order*np.array([ranks]).T).T[0]
    ranks = map(lambda x: round(x, 2), ranks)
    return dict(zip(names, ranks))
colnames = features.columns
# Construct our DecisionTreeRegressor
from sklearn.feature_selection import RFE, f_regression
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor()
Dtree = RFE(tree, n_features_to_select=1, verbose =3 )
Dtree.fit(features,target)
ranks["Dtree"] = ranking(list(map(float, Dtree.ranking_)), colnames, order=-1)
Fitting estimator with 19 features.
Fitting estimator with 18 features.
Fitting estimator with 17 features.
Fitting estimator with 16 features.
Fitting estimator with 15 features.
Fitting estimator with 14 features.
Fitting estimator with 13 features.
Fitting estimator with 12 features.
Fitting estimator with 11 features.
Fitting estimator with 10 features.
Fitting estimator with 9 features.
Fitting estimator with 8 features.
Fitting estimator with 7 features.
Fitting estimator with 6 features.
Fitting estimator with 5 features.
Fitting estimator with 4 features.
Fitting estimator with 3 features.
Fitting estimator with 2 features.
# Construct our ExtraTreesRegressor
from sklearn.ensemble import ExtraTreesRegressor
Xtree = ExtraTreesRegressor()
EXtree = RFE(Xtree, n_features_to_select=1, verbose =3 )
EXtree.fit(features,target)
ranks["EXtree"] = ranking(list(map(float, EXtree.ranking_)), colnames, order=-1)
Fitting estimator with 19 features.
Fitting estimator with 18 features.
Fitting estimator with 17 features.
Fitting estimator with 16 features.
Fitting estimator with 15 features.
Fitting estimator with 14 features.
Fitting estimator with 13 features.
Fitting estimator with 12 features.
Fitting estimator with 11 features.
Fitting estimator with 10 features.
Fitting estimator with 9 features.
Fitting estimator with 8 features.
Fitting estimator with 7 features.
Fitting estimator with 6 features.
Fitting estimator with 5 features.
Fitting estimator with 4 features.
Fitting estimator with 3 features.
Fitting estimator with 2 features.
# Construct our RandomForestRegressor
from sklearn.ensemble import RandomForestRegressor
RF = RandomForestRegressor()
RandF = RFE(RF, n_features_to_select=1, verbose =3 )
RandF.fit(features,target)
ranks["RandF"] = ranking(list(map(float, RandF.ranking_)), colnames, order=-1)
Fitting estimator with 19 features.
Fitting estimator with 18 features.
Fitting estimator with 17 features.
Fitting estimator with 16 features.
Fitting estimator with 15 features.
Fitting estimator with 14 features.
Fitting estimator with 13 features.
Fitting estimator with 12 features.
Fitting estimator with 11 features.
Fitting estimator with 10 features.
Fitting estimator with 9 features.
Fitting estimator with 8 features.
Fitting estimator with 7 features.
Fitting estimator with 6 features.
Fitting estimator with 5 features.
Fitting estimator with 4 features.
Fitting estimator with 3 features.
Fitting estimator with 2 features.
# Construct our AdaBoostRegressor
from sklearn.ensemble import AdaBoostRegressor
Adb = AdaBoostRegressor()
AdaBoost = RFE(Adb, n_features_to_select=1, verbose =3 )
AdaBoost.fit(features,target)
ranks["AdaBoost"] = ranking(list(map(float, AdaBoost.ranking_)), colnames, order=-1)
Fitting estimator with 19 features.
Fitting estimator with 18 features.
Fitting estimator with 17 features.
Fitting estimator with 16 features.
Fitting estimator with 15 features.
Fitting estimator with 14 features.
Fitting estimator with 13 features.
Fitting estimator with 12 features.
Fitting estimator with 11 features.
Fitting estimator with 10 features.
Fitting estimator with 9 features.
Fitting estimator with 8 features.
Fitting estimator with 7 features.
Fitting estimator with 6 features.
Fitting estimator with 5 features.
Fitting estimator with 4 features.
Fitting estimator with 3 features.
Fitting estimator with 2 features.
# Construct our GradientBoostingRegressor
from sklearn.ensemble import GradientBoostingRegressor
GBT = GradientBoostingRegressor(n_estimators=100, learning_rate=1.0,max_depth=1, random_state=0)
GradBoost = RFE(GBT, n_features_to_select=1, verbose =3 )
GradBoost.fit(features,target)
ranks["GradBoost"] = ranking(list(map(float, GradBoost.ranking_)), colnames, order=-1)
Fitting estimator with 19 features.
Fitting estimator with 18 features.
Fitting estimator with 17 features.
Fitting estimator with 16 features.
Fitting estimator with 15 features.
Fitting estimator with 14 features.
Fitting estimator with 13 features.
Fitting estimator with 12 features.
Fitting estimator with 11 features.
Fitting estimator with 10 features.
Fitting estimator with 9 features.
Fitting estimator with 8 features.
Fitting estimator with 7 features.
Fitting estimator with 6 features.
Fitting estimator with 5 features.
Fitting estimator with 4 features.
Fitting estimator with 3 features.
Fitting estimator with 2 features.
# Construct our Linear Regression model
# (note: normalize=True requires scikit-learn < 1.2; the argument was removed in later versions)
lr = LinearRegression(normalize=True)
lr.fit(features,target)

#stop the search when only the last feature is left
LinReg = RFE(lr, n_features_to_select=1, verbose =3 )
LinReg.fit(features,target)
ranks["LinReg"] = ranking(list(map(float, LinReg.ranking_)), colnames, order=-1)
Fitting estimator with 19 features.
Fitting estimator with 18 features.
Fitting estimator with 17 features.
Fitting estimator with 16 features.
Fitting estimator with 15 features.
Fitting estimator with 14 features.
Fitting estimator with 13 features.
Fitting estimator with 12 features.
Fitting estimator with 11 features.
Fitting estimator with 10 features.
Fitting estimator with 9 features.
Fitting estimator with 8 features.
Fitting estimator with 7 features.
Fitting estimator with 6 features.
Fitting estimator with 5 features.
Fitting estimator with 4 features.
Fitting estimator with 3 features.
Fitting estimator with 2 features.
# Using Ridge 
ridge = Ridge(alpha = 7)
ridge.fit(features,target)
ranks['Ridge'] = ranking(np.abs(ridge.coef_), colnames)

# Using Lasso
lasso = Lasso(max_iter=100000,alpha=.05)
lasso.fit(features,target)
ranks["Lasso"] = ranking(np.abs(lasso.coef_), colnames)
xgb = XGBRegressor()
xgb.fit(features,target)
ranks["Xgbt"] = ranking(xgb.feature_importances_, colnames)
# Create empty dictionary to store the mean value calculated from all the scores
r = {}
for name in colnames:
    r[name] = round(np.mean([ranks[method][name] 
                             for method in ranks.keys()]), 2)
methods = sorted(ranks.keys())
ranks["Mean"] = r
methods.append("Mean")
# Gather each method's scores, plus the mean, into a single dataframe (one column per method)
methods_order = ["Ridge", "LinReg", "Xgbt", "Dtree", "EXtree", "RandF", "AdaBoost", "GradBoost", "Mean"]
ranking_score = pd.DataFrame({"Features": colnames})
for method in methods_order:
    ranking_score[method] = [ranks[method][name] for name in colnames]

ranking_score.sort_values(by="Mean",ascending=False,inplace=True)
ranking_score
    Features       Ridge  LinReg  Xgbt  Dtree  EXtree  RandF  AdaBoost  GradBoost  Mean
18  reqgoo3        0.19   0.61    1.00  0.89   0.89    0.89   0.94      0.94       0.73
 3  t              0.03   0.56    0.12  0.94   0.94    0.94   1.00      0.89       0.61
 0  week           0.01   0.33    0.15  1.00   1.00    1.00   0.89      1.00       0.60
10  20_39ans       0.37   0.94    0.04  0.39   0.33    0.39   0.56      0.56       0.50
16  reqgoo1        0.03   0.50    0.09  0.50   0.83    0.44   0.72      0.83       0.44
 1  region_code    0.00   0.17    0.07  0.83   0.61    0.78   0.67      0.78       0.43
 9  0_19ans        0.01   0.83    0.16  0.61   0.78    0.56   0.83      0.00       0.42
 6  pression       0.00   0.00    0.04  0.72   0.72    0.83   0.61      0.72       0.40
 8  Year           1.00   0.39    0.00  0.11   0.67    0.22   0.11      0.06       0.40
 2  ff             0.02   0.44    0.02  0.44   0.39    0.50   0.78      0.50       0.34
 4  u              0.01   0.22    0.02  0.78   0.44    0.72   0.06      0.67       0.33
13  75etplus       0.22   1.00    0.08  0.22   0.17    0.28   0.50      0.28       0.31
 5  n              0.00   0.06    0.02  0.56   0.28    0.61   0.39      0.61       0.28
 7  precipitation  0.00   0.28    0.03  0.67   0.50    0.67   0.28      0.11       0.28
12  60_74ans       0.29   0.89    0.03  0.17   0.22    0.33   0.00      0.22       0.24
11  40_59ans       0.06   0.78    0.02  0.28   0.11    0.11   0.44      0.17       0.22
17  reqgoo2        0.00   0.11    0.08  0.33   0.56    0.17   0.22      0.44       0.21
15  Prop_f         0.04   0.72    0.00  0.06   0.06    0.06   0.17      0.39       0.17
14  Prop_h         0.04   0.67    0.04  0.00   0.00    0.00   0.33      0.33       0.16
list(ranking_score[ranking_score['Mean']>0.5].Features)
['reqgoo3', 't', 'week']
# Put the mean scores into a Pandas dataframe
meanplot = pd.DataFrame(list(r.items()), columns= ['Feature','Mean Ranking'])

# Sort the dataframe
meanplot = meanplot.sort_values('Mean Ranking', ascending=False)
# Let's plot the ranking of the features
import warnings
warnings.filterwarnings("ignore")
# note: in recent seaborn versions, factorplot was renamed catplot and size renamed height
plot=sns.factorplot(x="Mean Ranking", y="Feature", data = meanplot, kind="bar", 
               size=14, aspect=1.9, palette='coolwarm')

This last technique is in fact a combination of several methods: it averages their results and thus takes advantage of a large number of rankings at once. It remains heuristic in the sense that you have to choose a threshold value, but it is nevertheless safer, since it reduces the risk of discarding relevant features or of keeping unimportant ones.
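
As a closing illustration, here is a minimal sketch (assuming the ranking_score dataframe built above) that wraps this heuristic in a small helper; the 0.5 threshold is the arbitrary value used earlier, not a universal rule.

# Sketch: keep only the features whose mean ranking exceeds a chosen threshold
def select_features(ranking_score, features, threshold=0.5):
    keep = list(ranking_score.loc[ranking_score["Mean"] > threshold, "Features"])
    return features[keep]

X_selected = select_features(ranking_score, features)
list(X_selected.columns)   # ['reqgoo3', 't', 'week'] with the 0.5 threshold used above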

Armel

ML Engineer
