Friday, 6 January 2017

Optimal Processing Keyword Cover Search In Spatial Database.

Vol. 4  Issue 2
Year: 2016
Issue:Jun-Aug
Title:Optimal Processing Keyword Cover Search In Spatial Database.
Author Name:K.S. Dadapeer , Shaik Salam and T.V. Rao
Synopsis:
Keywords are used in different ways. In text editing and DBMS (Database Management System) applications, a keyword is used to find certain records; in programming languages, a keyword is a reserved word with a special meaning; and keywords are also used to search for relevant web pages through a search engine. In a spatial database, each object in a set is associated with keywords such as hotel, restaurant, etc. The issue here is the closest keyword cover search: finding a set of objects (a keyword cover) that together cover all query keywords while keeping the inter-object distance minimal. Nowadays keyword ratings are given importance to support better decisions, so a generic version of the problem, called best keyword cover, considers both inter-object distance and keyword rating. The baseline algorithm, which simulates closest keyword search, combines objects from the query keywords to generate candidate covers; its performance degrades as the number of query keywords increases, because the number of candidates grows rapidly. To overcome this drawback, a scalable algorithm called Keyword Nearest Neighbour Expansion (K-NNE) was introduced as a variation on the previous approach; K-NNE gradually reduces the number of candidate covers. In earlier work on minimal tree covers of query keywords, returning a subgraph rather than a minimal tree was found to be more informative to users: nodes that are close to each other in a tree have a strong relationship, while nodes far from each other have a weak relationship with the content nodes. When all keywords have the same importance, results with strong relationships (a short distance between each pair of selected nodes) are preferred over those with weak relationships. Tree-based methods return both content and non-content nodes in their results, and on input graphs with hundreds or thousands of nodes they have high time and memory complexity.
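
As a rough, hypothetical illustration of the best keyword cover idea (a brute-force baseline, not the authors' K-NNE algorithm), the sketch below scores every combination of one object per query keyword by its diameter and its worst keyword rating; the coordinates, ratings, and the weighting parameter alpha are all made up.

```python
# Minimal brute-force sketch of "best keyword cover" scoring (hypothetical data,
# not the paper's K-NNE algorithm): pick one object per query keyword so the
# cover is tight in space and well rated.
from itertools import product
from math import dist

# (x, y, keyword, rating) -- all values invented for illustration.
objects = [
    (1.0, 1.0, "hotel", 4.5), (5.0, 5.0, "hotel", 4.9),
    (1.5, 0.5, "restaurant", 4.0), (5.5, 4.5, "restaurant", 3.5),
    (0.5, 1.5, "bar", 3.8), (4.5, 5.5, "bar", 4.7),
]

def cover_score(cover, alpha=0.5):
    """Lower is better: combines the cover's diameter with its worst rating."""
    diameter = max(dist(a[:2], b[:2]) for a in cover for b in cover)
    worst_rating = min(o[3] for o in cover)
    return alpha * diameter - (1 - alpha) * worst_rating

def best_keyword_cover(query_keywords):
    groups = [[o for o in objects if o[2] == k] for k in query_keywords]
    return min(product(*groups), key=cover_score)   # exhaustive baseline

print(best_keyword_cover(["hotel", "restaurant", "bar"]))
```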

A Hybrid Clustering With Side Information In Text Mining

Vol. 4  Issue 2
Year: 2016
Issue:Jun-Aug
Title:A Hybrid Clustering With Side Information In Text Mining
Author Name:T. Naveen Kumar and Ramadevi
Synopsis:
In many online applications, a lot of side data or meta-information is available. This metadata comes in different forms, for example the links present in a document, user-access behaviour from web logs, document origin information, and other attributes embedded in the text document. For clustering purposes, these meta attributes carry a large amount of information, but they can also add noise to the mining process, which makes them difficult to incorporate. The existing COATES algorithm was created for this clustering approach, but the k-means step inside COATES creates some problems, as it cannot guarantee good cluster quality: it can lead to the wrong number of clusters, clusters of very different sizes, empty clusters, and outliers. The authors propose a Hybrid-COATES algorithm, which combines CURE with COATES for an efficient and effective clustering approach. For mining text data with the help of metadata or side information, the CURE algorithm is more capable than k-means. The Hybrid-COATES method attempts to address the scalability problem and improve the quality of the clustering results.

Effective Bug Triage With Software Data Reduction Techniques Using Clustering Mechanism

Vol. 4  Issue 2
Year: 2016
Issue:Jun-Aug
Title:Effective Bug Triage With Software Data Reduction Techniques Using Clustering Mechanism
Author Name:R. Nishanth Prabhakar and K.S. Ranjith
Synopsis:
Bug triage is the most important step in handling the bugs that occur during a software process. In manual bug triage, each received bug is assigned to a tester or developer by a triager. Since bugs arrive in huge numbers, the manual process is difficult to carry out and consumes many resources, both man-hours and money, so there is a need to reduce this expenditure. A mechanism is therefore proposed that enables a better and more efficient triage process by reducing the size of the bug data sets; it involves clustering techniques and selection techniques. This approach proved more efficient than manual bug triage when evaluated on bug data sets retrieved from the open-source bug repository Bugzilla.
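
The synopsis names clustering and selection techniques without fixing specific algorithms, so the following sketch is one plausible combination under stated assumptions: chi-squared feature selection to shrink the word dimension and k-means-based instance selection to shrink the set of bug reports. The tiny report corpus and developer labels are invented.

```python
# Illustrative bug data set reduction (a hypothetical pipeline, not necessarily
# the authors' exact techniques): feature selection shrinks the word dimension,
# and cluster-based instance selection shrinks the number of bug reports.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.cluster import KMeans

bug_reports = ["crash on save dialog", "save button crash", "slow login page",
               "login timeout error", "UI freeze on save", "page load slow"]
developers  = ["alice", "alice", "bob", "bob", "alice", "bob"]  # triage labels

X = TfidfVectorizer().fit_transform(bug_reports)
X = SelectKBest(chi2, k=5).fit_transform(X, developers)   # word-dimension reduction
X = X.toarray()

# Instance selection: keep only the report closest to each cluster centre.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
kept = [int(np.argmin(np.linalg.norm(X - c, axis=1))) for c in km.cluster_centers_]
print("representative reports:", [bug_reports[i] for i in sorted(set(kept))])
```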

Detection of Anomaly Based Application Layer DDoS Attacks Using Machine Learning Approaches

Vol. 4  Issue 2
Year: 2016
Issue:Jun-Aug
Title:Detection of Anomaly Based Application Layer DDoS Attacks Using Machine Learning Approaches
Author Name:M.S.P.S. Vani Nidhi and K. Munivara Prasad
Synopsis:
DDoS (Distributed Denial of Service) attacks are a major threat to security. These attacks mainly originate from the network layer or the application layer of compromised systems connected to the network, and their main intention is to deny or disrupt the services or network bandwidth of the victim or target system. Nowadays, application-layer DDoS attacks pose a serious threat to the Internet, and differentiating between legitimate/normal and malicious traffic is a very difficult task. A lot of research work has been done on detecting these attacks using machine learning approaches. In this paper, the authors propose machine learning metrics for detecting application-layer DDoS attacks.
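
The paper's specific metrics and data are not reproduced here; the following is a minimal sketch of the general approach, training an off-the-shelf classifier on synthetic per-session traffic features (requests per second, inter-request gap, distinct URLs, all invented).

```python
# Hedged sketch: classifying application-layer sessions as normal or DDoS from
# simple request-rate features. The feature distributions are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Features per session: [requests/sec, mean inter-request gap, distinct URLs].
normal = np.column_stack([rng.normal(2, 0.5, 200),
                          rng.normal(0.5, 0.1, 200),
                          rng.integers(5, 30, 200)])
attack = np.column_stack([rng.normal(50, 10, 200),
                          rng.normal(0.02, 0.01, 200),
                          rng.integers(1, 3, 200)])
X = np.vstack([normal, attack])
y = np.array([0] * 200 + [1] * 200)        # 0 = legitimate, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["normal", "ddos"]))
```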

Installation Of Alchemi.Net In Computational Grid

Vol. 4  Issue 2
Year: 2016
Issue:Jun-Aug
Title:Installation Of Alchemi.Net In Computational Grid
Author Name:Neeraj Rathore
Synopsis:
The Grid is an increasingly growing virtualized and distributed environment for sharing resources. This situation imposes greater demands on many techniques, e.g., load balancing and security, and fault-tolerance capability is one of the most demanding criteria. Among fault-tolerance mechanisms there are different types of schemes and techniques that help make the Grid fault tolerant; checkpointing is one such method. To date, much middleware/software in the Grid/Cloud environment is not fully fault tolerant, and different middleware offers different levels of fault tolerance. Some middleware, such as Alchemi.NET, does not have a robust fault-tolerance mechanism. Therefore, in this research work Alchemi.NET has been chosen, and a scenario of the checkpointing algorithm is shown to tolerate faults in the Grid. The main objective of the paper is to describe the installation and computation steps of Alchemi.NET for simulation in the field of Grid computing.

PBI2D- Priority Based Intelligent Imbalanced Data Classification of Health Care data with Missing Values

Vol. 4  Issue 1
Year: 2016
Issue:Mar-May
Title:PBI2D- Priority Based Intelligent Imbalanced Data Classification of Health Care data with Missing Values
Author Name:A. Anuradha and Saradhi Varma 
Synopsis:
The authors trace some of the recent developments in the field of learning from imbalanced data, review the approaches adopted for this problem, and identify challenges and potential directions in this comparatively new field. In the medical domain, data features frequently contain missing values, which can introduce serious bias into analytical modelling, and standard data mining methods often produce poor performance measures on such data. In this paper, the authors propose a new method to simultaneously classify large datasets and reduce the effects of missing values. The proposed method is based on a multilevel framework of the cost-sensitive ASVM (Adaptive Support Vector Machine) combined with an expectation-maximization imputation method for missing values, applied to the analysis of large, imbalanced datasets. The authors thus developed PBI2D (Priority Based Intelligent Imbalanced Data classification) of healthcare data with missing values, and they compare the classification results of multilevel ASVM-based algorithms on public benchmark datasets with imbalanced classes and missing values, as well as on real data from health applications. The method produces fast, more accurate, and more robust classification results.
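
The authors' multilevel ASVM is not spelled out in the synopsis, so this sketch only shows the two named ingredients on synthetic data: imputing missing values and making the SVM cost-sensitive via class weights (a simple mean imputer stands in for the EM-style imputation).

```python
# Sketch of the two ingredients named in the synopsis -- imputation of missing
# values and cost-sensitive SVM classification -- not the authors' multilevel
# ASVM itself. The data are synthetic and deliberately imbalanced.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (rng.random(300) < 0.1).astype(int)      # ~10% positives: imbalanced classes
X[rng.random(X.shape) < 0.15] = np.nan       # knock out 15% of the values

model = make_pipeline(
    SimpleImputer(strategy="mean"),          # stand-in for EM-style imputation
    SVC(class_weight="balanced"),            # per-class misclassification costs
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```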

E-MDDR Scheduling Algorithm For Input-Queued Switches

Vol. 4  Issue 1
Year: 2016
Issue:Mar-May
Title:E-MDDR Scheduling Algorithm For Input-Queued Switches
Author Name:K. Navaz and Kannan Balasubramanian 
Synopsis:
The increasing use of multicast applications on the Internet has created an urgent need for high-speed routers/switches that handle multicast traffic efficiently. The core component of a router is the switch fabric, and the scheduling algorithm configures the switch fabric to arbitrate and transfer cells between the input and output ports. Most existing multicast scheduling algorithms perform well under uniform traffic but fail to achieve maximum throughput under non-uniform traffic. In this paper, the authors propose a multicast scheduling algorithm called Enhanced Multicast Due-Date Round Robin (E-MDDR), which improves throughput compared with the Multicast Due-Date Round Robin (MDDR) algorithm. E-MDDR computes the residue of cells that have waited up to M cell times; such cells are declared emergency cells and are transferred within every M time slots for an N x M switch. In this way it achieves improved throughput and minimal delay under non-uniform Bernoulli and bursty traffic patterns.
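
A toy reading of the emergency-cell rule, under the assumption that a cell's "residue" is its waiting time: once a queued cell has waited M cell times it pre-empts the round-robin order. Queue contents and M are arbitrary; this is not the full E-MDDR switch scheduler.

```python
# Toy model of the emergency-cell rule sketched above (an illustration, not the
# full E-MDDR switch scheduler): a queued cell whose age reaches M cell times
# becomes an emergency cell and is served ahead of the round-robin order.
from collections import deque

M = 4                                        # illustrative due-date threshold
queues = [deque([0, 0]), deque([0]), deque([0, 0, 0])]  # per-input cell ages
pointer = 0                                  # round-robin pointer over inputs

for t in range(8):                           # one cell forwarded per time slot
    backlogged = [i for i, q in enumerate(queues) if q]
    if not backlogged:
        break
    emergencies = [i for i in backlogged if queues[i][0] >= M]
    if emergencies:
        src = emergencies[0]                 # emergency cells pre-empt the RR order
    else:
        src = min(backlogged, key=lambda i: (i - pointer) % len(queues))
        pointer = (src + 1) % len(queues)
    queues[src].popleft()
    for q in queues:                         # every waiting cell ages one cell time
        for i in range(len(q)):
            q[i] += 1
    print(f"t={t}: served input {src}, queue ages = {[list(q) for q in queues]}")
```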

Prediction of Commodities Market by Using Data Mining Technique

Vol. 4  Issue 1
Year: 2016
Issue:Mar-May
Title:Prediction of Commodities Market by Using Data Mining Technique
Author Name:S. Parkavi, S. Sasikumar and M. Venkatesh Saravanakumar 
Synopsis:
Data mining is a technology used to find interesting patterns in huge datasets. The commodity market is a huge collection of various commodities, such as gold and oil, which are referred to as hard commodities. In ancient days, gold coins were a medium of exchange. Another important commodity is oil; its price changes daily, which has an impact on every good and service provided. A country can make payments via paper currency, and this mode can be changed to an exchange of gold at a fixed rate: the exchange rate between currencies was based on the amount of currency needed to purchase one ounce of gold. The US dollar is widely accepted as an instrument of global currency exchange, and the gold price is directly related to the USD. This paper examines the relationship between the prices of gold and oil with respect to the USD, and it also explores the commodity market based on the USD. The prices of gold and oil and the US dollar share different relationships in different circumstances, and this paper explores the interesting patterns that exist in the commodity market.
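
A minimal sketch of this kind of relationship analysis, assuming a hypothetical commodities.csv with date, gold, oil, and usd_index columns: the pairwise correlation of daily returns.

```python
# Pairwise correlation of gold, oil, and a USD index -- the simplest form of
# the relationship analysis described above. File name and column names
# ("gold", "oil", "usd_index") are hypothetical.
import pandas as pd

prices = pd.read_csv("commodities.csv", parse_dates=["date"], index_col="date")
returns = prices[["gold", "oil", "usd_index"]].pct_change().dropna()
print(returns.corr())   # e.g., gold and USD returns are often negatively correlated
```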

Application of Data mining Techniques In Higher Education System

Vol. 4  Issue 1
Year: 2016
Issue:Mar-May
Title:Application of Data mining Techniques In Higher Education System
Author Name:J. Albunskuba and M. Venkatesh Saravanakumar
Synopsis:
Data mining is used to extract meaningful information and to discover significant relationships among variables stored in large data sets. Data mining techniques can be applied in various fields such as microbiology, bioinformatics, medical imaging, finance, healthcare, education, etc. Education is one of the fields where data mining algorithms can be applied to find previously unidentified patterns. An educational organization is one of the most important parts of society and plays a vital role in the growth and development of any nation, and the main objective of a higher education institution is to provide quality education to its students. This paper predicts the relationship between post-graduation and research enrollment of students, presenting a model in the context of higher education admission. Student enrollment for post-graduation and research from various higher education providers in the UK for five consecutive academic years (2009-2014) is taken as the data set. Based on candidate enrollment, association rule mining has been applied and the resulting rules classified to show the variations in admission. Based on the findings, decision makers in the Higher Education System (HES) can frame norms to improve student enrollment.
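
As a sketch of the rule-mining step, the snippet below finds high-support, high-confidence rules over a handful of invented enrollment records; the real UK enrollment data set and the paper's thresholds are not reproduced.

```python
# Tiny association-rule sketch over hypothetical enrollment transactions.
from collections import Counter
from itertools import combinations

transactions = [  # each record: attributes of one admitted candidate (made up)
    {"uk_domicile", "postgrad", "full_time"},
    {"overseas", "postgrad", "part_time"},
    {"uk_domicile", "research", "full_time"},
    {"uk_domicile", "postgrad", "full_time"},
    {"overseas", "research", "full_time"},
]
min_support, min_confidence = 0.4, 0.7

item_counts = Counter(i for t in transactions for i in t)
pair_counts = Counter(p for t in transactions for p in combinations(sorted(t), 2))

n = len(transactions)
for (a, b), c in pair_counts.items():
    if c / n >= min_support:
        for lhs, rhs in ((a, b), (b, a)):
            conf = c / item_counts[lhs]
            if conf >= min_confidence:
                print(f"{lhs} -> {rhs}  support={c/n:.2f} confidence={conf:.2f}")
```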

Mining Fuzzy Association Rules using Various Algorithms: A Survey

Vol. 4  Issue 1
Year: 2016
Issue:Mar-May
Title:Mining Fuzzy Association Rules using Various Algorithms: A Survey
Author Name:O. Gireesha and O. Obulesu
Synopsis:
Data mining is the process of extracting knowledge from large databases. Different types of knowledge and technology have been introduced for data mining over the last decade; among them, finding association rules in transactional data is a common task. The majority of studies deal with how binary-valued transaction data can be handled, but in real-world applications transaction data also contain quantitative and fuzzy values, so data mining algorithms have to deal with different types of data, which presents a challenge to researchers in this field. In this paper, the authors discuss several fuzzy mining concepts and techniques, along with the algorithms related to association rule discovery. Some fuzzy mining techniques, including mining fuzzy association rules and learning the membership functions, are described in this paper.
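
One ingredient common to the surveyed methods is mapping quantitative attributes to membership grades and computing fuzzy support; the sketch below does this with illustrative triangular membership functions and made-up transactions.

```python
# Sketch of one common fuzzy-mining step: convert a quantitative attribute to
# membership grades, then compute the fuzzy support of an itemset as the
# average of minimum grades. Membership breakpoints are illustrative.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical transactions: (amount spent, items bought).
transactions = [(12, 3), (45, 9), (78, 12), (30, 5), (60, 10)]

# "amount is Medium" and "quantity is High", with made-up breakpoints.
grades = [(tri(amount, 20, 50, 80), tri(qty, 6, 12, 18))
          for amount, qty in transactions]
fuzzy_support = sum(min(g_amt, g_qty) for g_amt, g_qty in grades) / len(transactions)
print(f"fuzzy support of (amount=Medium AND quantity=High): {fuzzy_support:.3f}")
```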

Privacy Policy and Deduplication of User Uploaded Images on Social Websites

Vol. 3  Issue 4
Year: 2016
Issue:Dec-Feb
Title:Privacy Policy and Deduplication of User Uploaded Images on Social Websites
Author Name:J. Santhiya, N. Kalaivani, K. Nirosha, R. Jayanthinisha, and K. Kamatchi 
Synopsis:
Usage of social media has increased nowadays. Users share personal information and images through social networks, but maintaining privacy has become a major problem, and duplication of images reduces the capacity of the databases. The authors propose a two-level framework for maintaining privacy and securing photo sharing. Users who want to maintain privacy are provided with access control. The Canny edge detection technique is used for the deduplication of images, which increases the available storage capacity of the database. Watermarking is applied to every image shared on the website, for copyright protection and to restrict use of the images on other websites.
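
A hedged sketch of Canny-based deduplication (the paper's exact matching criterion is not given): two uploads count as duplicates when their downscaled edge maps differ in only a small fraction of pixels. OpenCV is assumed, and the thresholds are arbitrary.

```python
# Canny-based image deduplication sketch: compare compact edge-map signatures.
# The Canny thresholds, signature size, and tolerance are illustrative choices.
import cv2
import numpy as np

def edge_signature(path, size=(32, 32)):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)        # Canny edge map
    return cv2.resize(edges, size) > 0       # boolean 32x32 signature

def is_duplicate(path_a, path_b, tolerance=0.05):
    a, b = edge_signature(path_a), edge_signature(path_b)
    return np.mean(a != b) < tolerance       # fraction of differing pixels

print(is_duplicate("upload1.jpg", "upload2.jpg"))   # hypothetical file names
```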

Parameter Selection Using Fruit Fly Optimization

Vol. 3  Issue 4
Year: 2016
Issue:Dec-Feb
Title:Parameter Selection Using Fruit Fly Optimization
Author Name:R.S. Shudapreyaa, and S. Anandamurugan
Synopsis:
The fruit fly algorithm is a novel intelligent optimization algorithm based on the foraging behaviour of real fruit flies. Recently, a new Fruit Fly Optimization Algorithm (FOA) has been proposed to solve optimization problems. To find the optimum solution of an optimization problem, fine-tuned parameters for the fruit fly algorithm are normally obtained by manual testing. This study deals with enhancing the search efficiency and greatly improving the search quality, and with an automated approach for finding the relevant parameters using a grid search algorithm. The approach also provides better global search ability, faster convergence, and more precise convergence. It is applied to the optimization of a very large antenna array for maximum directivity, using a modified fruit fly optimization algorithm with random search by two groups of swarms and an adaptive fruit fly swarm population size.
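
For concreteness, here is the classic two-dimensional FOA loop: each fly takes a random flight step, the smell concentration is the reciprocal of its distance to the origin, and the swarm converges on the best-smelling position. The objective, swarm size, and iteration count are illustrative, not the paper's tuned values.

```python
# Compact Fruit Fly Optimization Algorithm (FOA) sketch in its classic 2-D
# form. The toy objective and loop sizes are arbitrary illustrations.
import random

def fitness(s):                              # toy objective: maximize -(s - 0.5)^2
    return -(s - 0.5) ** 2

x_axis, y_axis = random.uniform(0, 1), random.uniform(0, 1)
best_fit = float("-inf")
for _ in range(200):                         # iterations
    for _ in range(20):                      # swarm size
        x = x_axis + random.uniform(-1, 1)   # random flight direction/distance
        y = y_axis + random.uniform(-1, 1)
        d = (x * x + y * y) ** 0.5 or 1e-12  # distance to origin
        smell = 1.0 / d                      # smell concentration S = 1/D
        fit = fitness(smell)
        if fit > best_fit:                   # swarm origin moves to the best smell
            best_fit, x_axis, y_axis = fit, x, y
print("best fitness found:", best_fit)
```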

Comparative Analysis of RSA and Modified RSA Cryptography

Vol. 3  Issue 4
Year: 2016
Issue:Dec-Feb
Title:Comparative Analysis of RSA and Modified RSA Cryptography
Author Name:Madhurima Dubey, and Yojana Yadav 
Synopsis:
In RSA (Rivest-Shamir-Adleman) cryptography, the basic factors are key length, computation time, security, authentication, and integrity. Generally, in public-key cryptography, key length and security are directly proportional to each other. Original RSA uses two prime numbers as input, which give the modulus 'n'; the encryption and decryption processes depend on this modulus. An attacker who breaks 'n' into its two prime factors breaks the system, so to avoid this problem the authors use three large prime numbers, which increases the brute-force time needed to factorize 'n'. This paper mainly focuses on the number of primes used, security, and computation time.
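
A toy comparison of the two-prime and three-prime constructions, with small primes so the numbers stay readable (real keys use primes hundreds of digits long); the exponent e=17 is chosen only to keep the arithmetic small.

```python
# Two-prime RSA vs the three-prime variant described above, on toy numbers.
from math import gcd

def make_key(primes, e):
    n, phi = 1, 1
    for p in primes:
        n *= p
        phi *= (p - 1)               # Euler's totient for distinct primes
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)              # modular inverse (Python 3.8+)
    return n, e, d

for primes in [(61, 53), (61, 53, 47)]:     # 2-prime vs 3-prime modulus
    n, e, d = make_key(primes, e=17)
    m = 42                           # toy message, m < n
    c = pow(m, e, n)                 # encryption: c = m^e mod n
    assert pow(c, d, n) == m         # decryption: m = c^d mod n
    print(f"{len(primes)} primes: n={n}, ciphertext={c}")
```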

Global Optimization for the Forward Neural Networks and Their Applications

Vol. 3  Issue 4
Year: 2016
Issue:Dec-Feb
Title:Global Optimization for the Forward Neural Networks and Their Applications
Author Name:K. Sunil Manohar Reddy, G. Ravindra Babu, and S. Krishna Mohan Rao
Synopsis:
This paper describes and evaluates several global optimization issues of Artificial Neural Networks (ANNs) and their applications. The authors examine the properties of feed-forward neural networks and the process of determining appropriate network inputs and architecture, and build a short-term gas load forecasting system, the Tell Future system. This system, built on various Back-Propagation (BP) algorithms, performs very well for short-term gas load forecasting. The standard BP algorithm for training feed-forward neural networks has proven robust even for difficult problems. To forecast the future load from the trained networks, historical loads, temperature, wind velocity, and calendar information are used, in addition to the predicted future temperature and wind velocity. Compared to other regression methods, neural networks allow more flexible relationships between temperature, wind, calendar information, and the load pattern. Feed-forward neural networks can be used for many kinds of forecasting in different industrial areas; similar models can be built for electric load forecasting, daily water consumption forecasting, stock and market forecasting, traffic flow, and product sales forecasting.
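
A minimal feed-forward network trained with standard back-propagation on a synthetic load-versus-temperature curve illustrates the model class; it is not the Tell Future system.

```python
# One-hidden-layer feed-forward network with hand-written back-propagation,
# fitted to a synthetic curve (a sketch of the model class, not the paper's system).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))                      # e.g. normalized temperature
y = np.sin(3 * X) + 0.1 * rng.normal(size=X.shape)    # synthetic load curve

W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)    # 16 tanh hidden units
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
lr = 0.05

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                          # forward pass
    pred = h @ W2 + b2
    err = pred - y                                    # squared-error gradient
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)                  # back-propagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("final training MSE:", float((err ** 2).mean()))
```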

Extensions to Round Robin Scheduling: Comparison of Algorithms

Vol. 3  Issue 4
Year: 2016
Issue:Dec-Feb
Title:Extensions to Round Robin Scheduling: Comparison of Algorithms
Author Name:Ruwanthini Siyambalapitiya
Synopsis:
Round Robin (RR) is a widely used scheduling algorithm in time-sharing systems, as it is fair to all processes and free of starvation. The performance of the Round Robin algorithm depends very much on the size of the time quantum selected: if the time quantum is too large, the algorithm's performance approaches that of FCFS (First Come First Serve) scheduling; on the other hand, if the time quantum is too small, the number of context switches becomes large. Therefore, it is necessary to have some idea of the optimal time quantum, so that the average waiting and turnaround times and the number of context switches are not too large. Several extensions to the Round Robin algorithm have been proposed in the literature to overcome these difficulties. In this study, the author picks some of these extensions and compares their effectiveness by means of examples.
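
The quantum trade-off is easy to see in a small simulation; the burst times below are a standard textbook example, and all processes are assumed to arrive at time zero.

```python
# Round Robin simulation: how the time quantum trades average waiting time
# against the number of dispatches. Burst times are a textbook example.
from collections import deque

def round_robin(bursts, quantum):
    remaining = list(bursts)
    ready = deque(range(len(bursts)))        # all processes arrive at t = 0
    t, dispatches, finish = 0, 0, [0] * len(bursts)
    while ready:
        p = ready.popleft()
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        dispatches += 1
        if remaining[p] > 0:
            ready.append(p)                  # preempted: back of the queue
        else:
            finish[p] = t
    waits = [finish[i] - bursts[i] for i in range(len(bursts))]
    return sum(waits) / len(waits), dispatches

for q in (1, 4, 10, 100):                    # q=100 behaves like FCFS here
    avg_wait, dispatches = round_robin([24, 3, 3], q)
    print(f"quantum={q:3d}: avg waiting time={avg_wait:5.2f}, dispatches={dispatches}")
```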

Predicting the Existence of Mycobacterium Tuberculosis Using Hybrid Neuro Adaptive System

Vol. 3  Issue 3
Year: 2015
Issue:Sep-Nov
Title:Predicting the Existence of Mycobacterium Tuberculosis Using Hybrid Neuro Adaptive System
Author Name:Navneet Walia, Harsukhpreet Singh and Anurag Sharma
Synopsis:
This paper introduces a systematic approach to the design of a fuzzy inference system, based on a class of neural networks, to predict the existence of Mycobacterium tuberculosis. Fuzzy systems have achieved recognized success in several applications solving diverse classes of problems, and there is currently a trend to extend them to the medical field and give them adaptation capabilities through combination with other techniques. This article focuses on the development of a data mining solution using an Adaptive Neuro-Fuzzy Inference System (ANFIS) that makes the diagnosis of tuberculosis bacteria as precise as possible and helps in deciding whether it is reasonable to start treatment without waiting for exact medical test results. The dataset was collected from 200 different patient records obtained from a health clinic (with the consent of physicians and patients); each patient record has 19 input attributes covering demographic and medical test data. The transparency, objectivity, and easy implementation of the proposed method generate tuberculosis classes that suit the needs of pulmonary physicians, decrease the time consumed in reaching a diagnosis, and provide a useful way to start diagnosis in a more reasonable and fairer manner.

Speaker Recognition Using Dynamic Time Warping Polynomial Kernel SVM with Confusion Matrix

Vol. 3  Issue 3
Year: 2015
Issue:Sep-Nov
Title:Speaker Recognition Using Dynamic Time Warping Polynomial Kernel SVM with Confusion Matrix
Author Name:Piyush Mishra, Piyush Lotia
Synopsis:
In this paper, the authors present an efficient algorithm for improving the performance of a speaker verification system by using a polynomial kernel Support Vector Machine (SVM) along with Dynamic Time Warping (DTW). The objective of speaker verification is to verify the identity of a speaker by characterizing speaker-specific information. The idea is to improve the accuracy of the SVM classifier by combining dynamic time warping with a polynomial kernel; the resulting SVM achieves a higher degree of precision as well as accuracy. To characterize classification accuracy and precision, the authors use a confusion matrix. The experiment was performed over a database of 30 speakers, including male and female voices, with the polynomial kernel SVM used to improve the accuracy.
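
The DTW half of the method is the classic dynamic program below; how it feeds the polynomial-kernel SVM (which in scikit-learn would be the standard SVC(kernel="poly")) is an assumption here, not the authors' exact pipeline.

```python
# Plain dynamic-time-warping distance between two 1-D feature sequences,
# the alignment step used alongside the polynomial-kernel SVM.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW dynamic program."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two utterances of different lengths still align well:
print(dtw_distance([1, 2, 3, 2, 1], [1, 1, 2, 3, 3, 2, 1]))   # small distance
print(dtw_distance([1, 2, 3, 2, 1], [5, 5, 5, 5, 5]))         # large distance
```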

Efficient Agent Based Priority Scheduling and Load Balancing Using Fuzzy Logic in Grid Computing

Vol. 3  Issue 3
Year: 2015
Issue:Sep-Nov
Title:Efficient Agent Based Priority Scheduling and Load Balancing Using Fuzzy Logic in Grid Computing
Author Name:Neeraj Rathore
Synopsis:
Grid computing is the process of applying multiple computer resources to solve a complex problem. Load balancing and resource management are major problems in grid computing; the goal of load balancing algorithms is to allocate load across grid resources so as to maximize their utilization while decreasing the total task execution time. To overcome these problems, the author proposes an efficient agent-based priority scheduling and fuzzy-logic load balancing algorithm. The major role of priority scheduling is to assign priorities to jobs and to assign jobs to available resources; after the jobs are scheduled onto resources, the load is balanced using fuzzy rules. In the proposed scheme, the fuzzy rules are generated from each resource's CPU (Central Processing Unit) speed, memory capacity, and current load. The performance analysis shows that the proposed priority scheduling and fuzzy load balancing can improve the overall performance of grid computing resources.
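
As a sketch of how such fuzzy rules might score a resource from the three inputs named above, the snippet below uses triangular memberships and a single min-rule; the breakpoints and the rule itself are illustrative, not the author's rule base.

```python
# Fuzzy suitability scoring of grid resources from CPU speed, memory capacity,
# and current load. Breakpoints and the rule are illustrative assumptions.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def suitability(cpu_ghz, mem_gb, load_pct):
    fast_cpu = tri(cpu_ghz, 1.0, 3.5, 5.0)
    big_mem = tri(mem_gb, 2, 16, 32)
    light_load = tri(load_pct, -1, 0, 70)
    # Rule: suitable if CPU is fast AND memory is big AND load is light.
    return min(fast_cpu, big_mem, light_load)

resources = {"node-a": (3.2, 8, 30), "node-b": (2.0, 16, 80), "node-c": (3.6, 16, 10)}
best = max(resources, key=lambda r: suitability(*resources[r]))
print("dispatch next job to:", best)         # node-c: fast, roomy, lightly loaded
```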

A Viable Solution to Prevent SQL Injection Attack Using SQL Injection

Vol. 3  Issue 3
Year: 2015
Issue:Sep-Nov
Title:A Viable Solution to Prevent SQL Injection Attack Using SQL Injection
Author Name:Bharti Nagpal, Naresh Chauhan and Nanhay Singh
Synopsis:
Increased usage of web applications in recent years has emphasized the need to achieve confidentiality, integrity, and availability in web applications. Organizations use web applications to provide services like online banking, online shopping, and social networking, so people expect these applications to be secure and reliable when they pay bills, shop online, or make transactions. These web applications are backed by databases containing confidential user information, such as financial, medical, and personal records, which are highly sensitive and valuable; this in turn makes web applications an ideal target for external attacks such as Structured Query Language (SQL) injection. In fact, SQL injection is listed among the OWASP (Open Web Application Security Project) Top 10 web application vulnerabilities of 2010 [1]. There is an emerging need to handle such attacks and secure the stored information.
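
The standard defence whose absence SQL injection exploits is query parameterization; the sketch below (sqlite3 for self-containedness, not necessarily the authors' setting) contrasts a concatenated query with a bound one.

```python
# SQL injection in miniature: string concatenation lets a payload rewrite the
# query, while a parameterized query binds it as plain data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "alice' OR '1'='1"   # classic injection payload

# Vulnerable: the payload becomes part of the SQL text.
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
print("concatenated query returns:", conn.execute(vulnerable).fetchall())

# Safe: the driver binds the payload as a value, never as SQL.
safe = "SELECT * FROM users WHERE name = ?"
print("parameterized query returns:", conn.execute(safe, (user_input,)).fetchall())
```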

Design and Performance Analysis Between Different Data Sources

Vol. 3  Issue 3
Year: 2015
Issue:Sep-Nov
Title:Design and Performance Analysis Between Different Data Sources
Author Name:Sarmad M Hadi
Synopsis:
A data source is the core part of any application, and a web application can benefit from multiple data sources at the same time. This paper focuses on delivering a platform-independent application. Web applications are popular due to the ubiquity of web browsers; the convenience of using a web browser as a client to update and maintain web applications, without distributing and installing software on potentially thousands of client computers, is a key reason for that popularity. This paper compares two different data sources, a Microsoft Access database and a MySQL data source. The comparison has been made according to the simplicity of the design of both systems and their performance. PHP has been used to build both systems, and a WAMP server has been used to test them on mid-range computers. The paper shows in practice that the PHP-MySQL "couple" is still applicable and usable, with the benefits of being both free and fast.

Thursday, 5 January 2017

Investigation of Validity Metrics for Modified K-Means Clustering Algorithm

Vol. 3  Issue 2
Year: 2015
Issue:Jun-Aug
Title:Investigation of Validity Metrics for Modified K-Means Clustering Algorithm
Author Name:S. Govinda Rao and A. Govardhan 
Synopsis:
Clustering analysis is used to partition a data set into groups of similar objects, and the clustering results are influenced by the choice of distance measure and clustering algorithm. Here, clustering analysis is applied to group authors' h-index and g-index values with similar or dissimilar features. A validity measure is calculated to determine the best clustering by finding the minimum value of the measure. In this paper, the authors present the effective validations possible with the Davies-Bouldin index, the Silhouette index, and the quantization error.
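
The three measures are straightforward to compute; the sketch below uses scikit-learn for the first two and a few lines of NumPy for the quantization error, with toy points standing in for the h-index/g-index data.

```python
# Computing the three validity measures named above on toy 2-D points.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score, silhouette_score

X = np.array([[2, 3], [3, 3], [2, 4], [9, 9], [10, 8], [9, 10]], dtype=float)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_

print("Davies-Bouldin index:", davies_bouldin_score(X, labels))  # lower is better
print("Silhouette index:   ", silhouette_score(X, labels))       # higher is better

# Quantization error: mean distance from each point to its cluster centre.
qe = np.mean(np.linalg.norm(X - km.cluster_centers_[labels], axis=1))
print("Quantization error: ", qe)
```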

A Hybrid Approach for Replica Placement-Replacement (Harp-R Algo) Algorithm In Data-Grid

Vol. 3  Issue 2
Year: 2015
Issue:Jun-Aug
Title:A Hybrid Approach for Replica Placement-Replacement (Harp-R Algo) Algorithm In Data-Grid
Author Name:Ashish Kumar Singh and Udai Shanker
Synopsis:
A distributed database system is a network in which multiple clients are connected logically but are physically distributed, and each client has its own database. The replication process in such an environment plays a vital role in reducing response time. The process of creating an exact copy of a file is known as replication, and it can be executed in two ways: static replication and dynamic replication. In static replication, a created replica exists in the system until a user deletes it manually or its time expires; in dynamic replication, the system responds to changes in user behaviour and automatically creates or deletes replicas to improve performance. Data replication, one of the popular services in a distributed database system, is used to increase data availability and scalability. In this paper, the authors propose a data replication protocol that replicates the files which are most popular overall, running the replication process for files whose success rate is high. The proposed system uses the advantages of both static and dynamic replication.

Android App for ATM Location and Service Tracking

Vol. 3  Issue 2
Year: 2015
Issue:Jun-Aug
Title:Android App for ATM Location and Service Tracking
Author Name:B. Soundhariya Lakshmi, K. Ramya and D. Jagadeesan
Synopsis:
ATMs (Automated Teller Machines) are important in our day-to-day lives for withdrawing money from our bank accounts. The Android ATM-finder application is used to find ATMs around the user's location, reducing the time spent searching for an ATM. It is an advanced yet simple application in which the ATM locator helps find not only the ATMs around us but also their service availability, such as their working condition. This quick-click app is built for speed and ease of use. Types of ATM locations are preloaded to assist with an optional name search, which can also be done using GPS. Each ATM periodically updates its current status to the bank, and in the proposed system this updated information from the bank is obtained and used in the app to track each ATM's current service state (active or inactive). This saves the user's valuable time by making the current working status of the ATM available.

Dynamic Coalition Pattern for Distributed Coordination Of Multi-Agent Networks

Vol. 3  Issue 2
Year: 2015
Issue:Jun-Aug
Title:Dynamic Coalition Pattern for Distributed Coordination Of Multi-Agent Networks
Author Name:K. Ankitha Priyadarshini and B. Lalitha
Synopsis:
Self-organization presents a suitable model for building complex distributed systems that are self-managed. Self-organizing behaviour in multi-agent systems is one of the most interesting aspects here: an integrative self-organization mechanism is employed that combines three principles of self-organization, namely cloning, resource exchange, and relation adaptation. In performance evaluations, this technique beats comparable approaches under three distinct assessments, but the scenario assumes agents with identical capacities. In this research paper, the authors consider agents with distinct capacities and aim to design a paradigm that associates a dynamic coalition-formation technique with self-organization in a well-structured agent network. Coalition formation is incorporated based on self-organization rules; it enables agents to dynamically adjust their degrees of involvement in distinct coalitions and to join new coalitions at any time if necessary. To attain self-adaptation, a contract protocol is employed.

UNIFI: A Protocol for Association Rule Mining In Vertically Distributed Databases

Vol. 3  Issue 2
Year: 2015
Issue:Jun-Aug
Title:UNIFI: A Protocol for Association Rule Mining In Vertically Distributed Databases
Author Name:M. Suresh Babu and K. F. Bharati 
Synopsis:
Data warehouses and databases may store large amounts of data, and mining association rules in such databases requires much processing power; the solution used is therefore a distributed system. In data mining, association rules are useful for analyzing and predicting customer behaviour; they play an important part in shopping basket data analysis, product clustering, catalogue design, and store layout. In this paper, the authors use the protocol Unifying lists of Locally Frequent Itemsets (UNIFI) [1] for mining association rules in vertically partitioned data. The proposed system aims to implement the UNIFI protocol for association rule mining in a vertically distributed database. This protocol depends on the Fast Distributed Mining (FDM) algorithm, like UNIFI-KC (Kantarcioglu and Clifton) in [6]; FDM is an unsecured distributed version of the Apriori algorithm.

An Efficient Ticket Based Mutual Authentication Between User And Server For Secure Data Transmission

Vol. 3  Issue 1
Year: 2015
Issue:Mar-May
Title:An Efficient Ticket Based Mutual Authentication Between User And Server For Secure Data Transmission
Author Name:Dr. D. Srujan Chandra Reddy and V. V. Sunil Kumar
Synopsis:
The increased use of the Internet today can be attributed to the increase in population. Protecting data is often tougher than securing physical valuables; especially in data transmission between a user and a centralized server, it is difficult to protect data from attackers. To detect attackers, users must be cross-verified many times, and both user and server must be mutually authenticated. The authors studied previous research work that introduced methods for mutual authentication, but the earlier method is still vulnerable to attacks like denial of service, password guessing, and masquerade attacks. The authors performed cryptanalysis on the previous work, found that its method remains vulnerable, and have therefore proposed a new method of ticket-based mutual authentication between user and server. Finally, the authors present a security analysis and explain how this method detects the attacks that are possible in the existing method.
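
The synopsis does not spell out the ticket format, so the following is only a bare-bones illustration of the general idea: the server issues an HMAC-signed, time-limited ticket that either side can verify on later requests. The key and lifetime are placeholders.

```python
# Bare-bones ticket sketch (an illustration of the general idea, not the
# authors' protocol): an HMAC-signed, time-limited ticket.
import hashlib, hmac, time

SERVER_KEY = b"demo-only-secret"       # hypothetical shared server secret

def issue_ticket(user_id, ttl=300):
    expiry = str(int(time.time()) + ttl)
    payload = f"{user_id}|{expiry}".encode()
    tag = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return f"{user_id}|{expiry}|{tag}"

def verify_ticket(ticket):
    user_id, expiry, tag = ticket.rsplit("|", 2)
    payload = f"{user_id}|{expiry}".encode()
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and int(expiry) > time.time()

t = issue_ticket("alice")
print(verify_ticket(t))            # True: genuine, unexpired ticket
print(verify_ticket(t + "0"))      # False: tampered tag fails the HMAC check
```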

A Computational Intelligence Technique for Effective Medical Diagnosis Using Decision Tree Algorithm

Vol. 3  Issue 1
Year: 2015
Issue:Mar-May
Title:A Computational Intelligence Technique for Effective Medical Diagnosis Using Decision Tree Algorithm
Author Name:Panigrahi Srikanth, Ch.Anusha and Dharmaiah Devarapalli 
Synopsis:
Nowadays humankind suffers from many health complications. People are affected by progressive diseases (such as heart disease, diabetes, AIDS, hepatitis, and fibroids) and their complications. Data mining (also known as knowledge discovery) is the process of summarizing data into useful information by analyzing it from different perspectives; it is a technology for processing large volumes of data that combines traditional data analysis methods with highly developed algorithms. Data mining techniques can be used to support a wide range of security and business applications, such as workflow management, customer profiling, and fraud detection, and can also be used to predict the outcome of future observations. Such techniques can be developed with the decision tree algorithm. According to a recent survey by the World Health Organization (WHO), these diseases and their complications are among the problematic health hazards of this century; better and earlier diagnosis may improve the lives of affected people and let them lead healthy lives. In this paper, the authors present a decision tree algorithm for better diagnosis of diseases using association rule mining. Using this computational intelligence technique, the authors tested the performance of the method on disease data sets and present an algorithm that calculates sensitivity, specificity, comprehensibility, and rule length; the gain and gain ratio achieved show promising accuracy.
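
A worked example of the splitting measures mentioned at the end (entropy, information gain, and gain ratio) on a made-up symptom table:

```python
# Entropy, information gain, and gain ratio -- the splitting measures behind
# decision trees -- computed on hypothetical (fever, cough, diagnosis) records.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_and_ratio(rows, attr_index):
    labels = [r[-1] for r in rows]
    base = entropy(labels)
    remainder, split = 0.0, 0.0
    for value in set(r[attr_index] for r in rows):
        subset = [r[-1] for r in rows if r[attr_index] == value]
        w = len(subset) / len(rows)
        remainder += w * entropy(subset)
        split -= w * log2(w)                 # split information
    gain = base - remainder
    return gain, (gain / split if split else 0.0)

data = [("high", "yes", "sick"), ("high", "no", "sick"),
        ("low", "yes", "well"), ("low", "no", "well"), ("high", "yes", "sick")]
print("fever: gain=%.3f gain_ratio=%.3f" % gain_and_ratio(data, 0))
print("cough: gain=%.3f gain_ratio=%.3f" % gain_and_ratio(data, 1))
```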

Information Extraction with Semantic Clustering

Vol. 3  Issue 1
Year: 2015
Issue:Mar-May
Title:Information Extraction with Semantic Clustering
Author Name:H. Balaji and Dr. A. Govardhan
Synopsis:
Generally, Information Extraction (IE) has concentrated on satisfying exact, narrow, pre-specified requests from homogeneous corpora (e.g., extracting the location and time of seminars from a set of announcements). IE has traditionally relied on extensive human involvement in the form of hand-crafted extraction rules and hand-tagged training examples, and the user is required to explicitly pre-specify every relation of interest. Information extraction is usually associated with artificial intelligence, with the development of methods and algorithms for all aspects of language analysis, and with their computer implementation. Moving to another domain requires the user to name the target relations and to manually create new extraction rules or hand-label new training examples; this laborious work scales linearly with the number of target relations. This paper explains an extraction strategy for website data based on the DOM, intended to enhance search efficiency by preserving the topic information and filtering out the noise information that users are not interested in. Experiments were carried out on different data sets, and the results clearly show that the proposed semantic clustering extracts information from the web better than existing techniques.