I am going through different aspects of Hadoop and keep comparing it with an RDBMS. I just wanted to know: is a database really slow with huge/big data? If yes, why is it slow?
Does that mean a search over 1 MB of data is faster than a search over 1 TB of data inside the database? If so, does that mean database search does not scale linearly?
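To make the scaling question concrete, here is a toy sketch of my own (not from any real database engine): an indexed lookup behaves roughly like binary search, O(log n), while an unindexed search is a full scan, O(n). Counting comparisons shows how each scales as the table grows:

```python
# Toy model: count comparisons for a full scan vs an "indexed" lookup.
# The sizes are stand-ins for the 1 MB vs 1 TB comparison in the question.

def full_scan_comparisons(data, target):
    """Comparisons needed by a full table scan (no index)."""
    count = 0
    for value in data:
        count += 1
        if value == target:
            break
    return count

def indexed_comparisons(data, target):
    """Comparisons needed by a binary search over sorted (indexed) data."""
    count = 0
    lo, hi = 0, len(data)
    while lo < hi:
        count += 1
        mid = (lo + hi) // 2
        if data[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return count

small = list(range(1_000))       # the "1 MB" table
large = list(range(1_000_000))   # the "1 TB" table, scaled down

# Worst case: look for the last row.
print(full_scan_comparisons(small, small[-1]))  # 1,000
print(full_scan_comparisons(large, large[-1]))  # 1,000,000 (grows linearly)
print(indexed_comparisons(small, small[-1]))    # ~10
print(indexed_comparisons(large, large[-1]))    # ~20 (grows logarithmically)
```

So with a good index the lookup itself scales far better than linearly; it is full scans and unindexed queries where 1 TB hurts much more than 1 MB.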
Is it because the data is more structured, so that to fetch the required data the database has to jump from table to table (due to FK relationships with other tables)? In other words, reading data from different disk sectors takes time. For example, table 1's data is in sector 1 and table 2's data is in sector 100; to get the complete result, read operations are required on both tables. That increases the number of disk seek operations, and disk seeks make the database search really slow.
Please correct my understanding, or share your thoughts on top of it.