Abstract
Many applications maintain very large databases, and these databases produce log files that record database changes, including various user events; Apache Hadoop can be used to process such logs at scale. Log files have become a standard part of large applications and are essential in operating systems, computer networks, and distributed systems. Log files are often the only way to identify and locate an error in software, because log file analysis is not affected by the timing issues known as the probe effect. This contrasts with the analysis of a running program, where the analysis process can interfere with time-critical or resource-critical conditions within the analyzed program. Log files are often very large and can have a complex structure. Although generating log files is simple and straightforward, analyzing them can be a tremendous task that requires enormous computational resources, long running times, and sophisticated procedures. This often leads to a common situation in which log files are continuously generated and occupy valuable space on storage devices, yet nobody uses them or exploits the information they contain. The overall goal of this project is to design a generic log analyzer using the Hadoop MapReduce framework. This generic log analyzer can analyze different kinds of log files, such as email logs, web logs, firewall logs, server logs, and call data logs.
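To illustrate the kind of MapReduce job on which such an analyzer could be built, the following is a minimal sketch (not the project's actual implementation) that counts log lines per severity level using the standard Hadoop MapReduce API. The class names (LogLevelCount, LevelMapper, SumReducer) and the assumed line format "<timestamp> <LEVEL> <message>" are illustrative assumptions; a generic analyzer would substitute a parser appropriate to each log type.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/** Sketch: counts log lines per severity level (e.g. INFO, WARN, ERROR). */
public class LogLevelCount {

    public static class LevelMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text level = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Assumed line format: "<timestamp> <LEVEL> <message>"
            String[] fields = value.toString().split("\\s+", 3);
            if (fields.length >= 2) {
                level.set(fields[1]);
                context.write(level, ONE);   // emit (LEVEL, 1) for each line
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();              // total occurrences of this level
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "log level count");
        job.setJarByClass(LogLevelCount.class);
        job.setMapperClass(LevelMapper.class);
        job.setCombinerClass(SumReducer.class);  // same logic works as a combiner
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

In practice such a job would be packaged into a JAR and submitted with the hadoop jar command, with HDFS input and output paths supplied as the two arguments.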