For personal note use only
The basic questions
- how to design a combiner?
- how to generate base classifiers?
combination
combiner types
- fixed rules based on crisp labels or confidences [estimated posterior probabilities]
- specially trained rules based on classifier confidences
- generally trained rules interpreting base-classifier outputs as features
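As an illustration of the fixed rules, a minimal sketch of two of them, assuming each base classifier outputs either a crisp label or a vector of estimated posteriors (function names and the example numbers are mine, not from the notes):

```python
import numpy as np

def majority_vote(crisp_labels):
    """Fixed rule on crisp labels: pick the label predicted most often.
    crisp_labels: shape (n_classifiers,), integer class labels."""
    values, counts = np.unique(crisp_labels, return_counts=True)
    return values[np.argmax(counts)]

def mean_rule(posteriors):
    """Fixed rule on confidences: average the estimated posteriors
    over classifiers, then pick the class with the highest mean.
    posteriors: shape (n_classifiers, n_classes)."""
    return int(np.argmax(posteriors.mean(axis=0)))

# three classifiers, two classes
labels = np.array([0, 1, 1])
probs = np.array([[0.90, 0.10],
                  [0.40, 0.60],
                  [0.45, 0.55]])
print(majority_vote(labels))  # -> 1
print(mean_rule(probs))       # -> 0 (mean posteriors: [0.583, 0.417])
```

Note that the two rules can disagree on the same outputs: the vote only counts labels, while the mean rule lets the very confident first classifier dominate.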
$p(f, \phi \mid \omega) = p(f \mid \omega)\, p(\phi \mid \omega)$
we do not need to estimate the full set of parameters in one go,
which reduces the computation
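Under this independence assumption the product rule falls out directly: each classifier estimates its class-conditional density on its own feature set, and the combiner just multiplies. A minimal sketch (the densities and prior below are made-up numbers):

```python
import numpy as np

# class-conditional densities estimated separately per feature set:
# p(f | omega_k) and p(phi | omega_k) for two classes (hypothetical values)
p_f_given_omega   = np.array([0.20, 0.05])
p_phi_given_omega = np.array([0.10, 0.30])
prior             = np.array([0.50, 0.50])   # p(omega_k)

# independence: p(f, phi | omega) = p(f | omega) * p(phi | omega),
# so the joint density never has to be estimated in one go
joint = p_f_given_omega * p_phi_given_omega

posterior = prior * joint
posterior /= posterior.sum()                 # normalize over classes
print(posterior)                             # -> [0.571..., 0.428...]
```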
combination → regularization
combining classifiers often leads to a regularization effect
Bagging [Bootstrap Aggregating]
- select a training set size $m' < m$
- select at random $n$ subsets of $m'$ training objects [originally: bootstrap]
- train a classifier on each subset [originally: decision tree]
- combine [originally: majority vote]
- stabilizes volatile classifiers [see the sketch below]
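A minimal bagging sketch following these steps, with bootstrap samples and decision trees as in the original formulation (sklearn's `DecisionTreeClassifier` stands in for the base learner; the data is synthetic):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# synthetic two-class data (placeholder for a real training set)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

n_classifiers, m_prime = 25, 150          # n subsets of m' < m objects
trees = []
for _ in range(n_classifiers):
    idx = rng.integers(0, len(X), size=m_prime)  # bootstrap: draw with replacement
    trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

def bagged_predict(X_new):
    """Combine by majority vote over the n trees."""
    votes = np.stack([t.predict(X_new) for t in trees])  # (n_classifiers, n_points)
    return (votes.mean(axis=0) > 0.5).astype(int)

print(bagged_predict(np.array([[1.0, 1.0], [-1.0, -1.0]])))  # -> [1 0]
```

Each fully grown tree is volatile (small data changes flip its predictions), which is exactly why averaging over bootstrap replicates stabilizes it.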
boosting
- initialize all objects with an equal weight
- select a training set of size $m' < m$ according to the object weights
- train a weak classifier
- increase the weights of the erroneously classified objects
- repeat as long as needed
- combine
- improves the performance of weak classifiers
AdaBoost
- sample a training set according to the set of object weights [initially equal]
- use it for training a simple [weak] classifier $\omega_i$
- classify the entire data set, using the weights, to get an error estimate $\xi_i$
- store the classifier weight $a_i = 0.5 \log((1-\xi_i)/\xi_i)$
- multiply the weights of erroneously classified objects by $\exp(a_i)$ and of correctly classified objects by $\exp(-a_i)$
- go to step 1 as long as needed
- final classifier: weighted voting with weights $a_i$ [see the sketch below]