Abstract:
Databases contain very large datasets in which many duplicate records are present. Duplicate records can persist even when data entries are stored in a uniform manner in the database, i.e., after the structural heterogeneity problem has been resolved. Detecting these duplicates is difficult and requires considerable execution time. This literature survey reviews various techniques used to find duplicate records in databases; however, each of these techniques has shortcomings. To address them, progressive algorithms have been proposed that significantly increase the efficiency of finding duplicates when the execution time is limited, and that improve the quality of the records.
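One well-known progressive technique in this area is the progressive sorted-neighborhood method: records are sorted by a key, and neighbouring records are compared at increasing distances, so the most promising pairs are examined first and duplicates are reported early even if execution is cut short. The sketch below is an illustrative assumption of how such an approach can look; the function names, the `difflib` similarity measure, and the threshold are choices made for this example, not the paper's actual implementation.

```python
from difflib import SequenceMatcher

def similar(a, b, threshold=0.8):
    """Illustrative string similarity; the survey does not prescribe a measure."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

def progressive_duplicates(records, key=lambda r: r):
    """Yield likely duplicate pairs early (progressive sorted neighborhood):
    sort the records by a key, then compare neighbours at distance 1, 2, ...,
    so the closest (most promising) pairs are emitted first."""
    order = sorted(range(len(records)), key=lambda i: key(records[i]))
    for dist in range(1, len(records)):
        found = False
        for pos in range(len(records) - dist):
            i, j = order[pos], order[pos + dist]
            if similar(records[i], records[j]):
                found = True
                yield tuple(sorted((i, j)))
        if not found:
            break  # optional early stop once a distance yields no matches

records = ["john smith", "jon smith", "alice jones", "bob brown"]
dups = list(progressive_duplicates(records))  # pairs of indices into records
```

Because the generator yields pairs as soon as they are found, a caller with a limited time budget can simply stop iterating early and still keep the duplicates most likely to be genuine.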
Reference this research paper as:
Mohd Shoaib Amir Khan (2018). Progressive Identification of Duplicity. Int. J. Sci. Res. Publ. 6(4), ISSN 2250-3153. http://www.ijsrp.org/research-paper-0416.php?rp=P525248