Implementation of Decision tree learning algorithm with chi-square pruning

sachinbiradar9/Decision-Tree-Learning

Decision Tree Learning

The Decision Tree Learning algorithm adopts a greedy divide-and-conquer strategy: always test the most important attribute first. This test divides the problem up into smaller subproblems that can then be solved recursively. By “most important attribute,” we mean the one that makes the most difference to the classification of an example. That way, we hope to get to the correct classification with a small number of tests, meaning that all paths in the tree will be short and the tree as a whole will be shallow.
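The usual measure of "most important" is information gain: the expected reduction in entropy from splitting on an attribute. The sketch below illustrates the idea; the function and parameter names are illustrative, not necessarily those used in decision.py.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(examples, attr, label_key="label"):
    """Expected entropy reduction from splitting `examples` on `attr`.

    `examples` is a list of dicts mapping attribute names to values.
    """
    labels = [e[label_key] for e in examples]
    remainder = 0.0
    for value in {e[attr] for e in examples}:
        subset = [e[label_key] for e in examples if e[attr] == value]
        remainder += len(subset) / len(examples) * entropy(subset)
    return entropy(labels) - remainder

def most_important_attribute(examples, attrs):
    """Greedy choice: the attribute with the highest information gain."""
    return max(attrs, key=lambda a: information_gain(examples, a))
```

An attribute that perfectly separates the classes has gain equal to the full entropy of the node, while an attribute that tells us nothing has gain 0; the greedy step simply picks the maximum.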

On some problems, the Decision Tree Learning algorithm will generate a large tree even when there is actually no pattern to be found. This problem is called overfitting. A technique called decision tree pruning combats overfitting. Pruning works by eliminating nodes that are not clearly relevant. We start with a full tree, as generated by Decision Tree Learning. We then look at a test node that has only leaf nodes as descendants. If the test appears to be irrelevant, detecting only noise in the data, then we eliminate the test, replacing it with a leaf node. We repeat this process, considering each test with only leaf descendants, until each one has either been pruned or accepted as is. In this project, a chi-square test is used to decide whether an attribute is irrelevant.
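The chi-square test compares each child's class counts with the counts we would expect if the attribute were irrelevant (i.e., if every child had the parent's class ratio). A minimal sketch of the statistic, assuming a binary classification task; names here are illustrative rather than taken from decision.py:

```python
def chi_square_statistic(child_counts):
    """Chi-square deviation of a split from the parent's class ratio.

    `child_counts` is a list of (positives, negatives) pairs, one per
    child of the test node. Under the null hypothesis (the attribute is
    irrelevant), each child's class ratio should match the parent's.
    """
    total_p = sum(p for p, n in child_counts)
    total_n = sum(n for p, n in child_counts)
    total = total_p + total_n
    delta = 0.0
    for p, n in child_counts:
        size = p + n
        if size == 0:
            continue
        exp_p = total_p * size / total  # expected positives in this child
        exp_n = total_n * size / total  # expected negatives in this child
        if exp_p > 0:
            delta += (p - exp_p) ** 2 / exp_p
        if exp_n > 0:
            delta += (n - exp_n) ** 2 / exp_n
    return delta
```

The statistic is then compared against the chi-square critical value with (number of children - 1) degrees of freedom at significance level alpha: if the deviation falls below the critical value, the split is indistinguishable from noise and the test node is replaced by a leaf.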

Usage

python decision.py file_name alpha

file_name is a file containing the tagged (labeled) training data

alpha is the significance level used for chi-square pruning
