But what if the query, as in the case of our application, involves lakhs (hundreds of thousands) of records? How can I optimize it?
Typically, don't return hundreds of thousands of records. The only applications that tend to need that much data are bulk-loading tools. For applications intended for humans to use, if your ResultSet is much bigger than a few hundred records you're probably just returning unhelpful noise.
Some databases provide mechanisms to page ResultSets, though not all, so it's usually a good idea to be a bit more specific in your query.
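In MySQL (which the screenshot below comes from), paging is just LIMIT/OFFSET; the table and column names here are made up for illustration:

    -- Hypothetical table; fetch the third page of 50 rows.
    SELECT id, customer_name
    FROM customers
    ORDER BY customer_name
    LIMIT 50 OFFSET 100;

Note that OFFSET gets slow for deep pages, so for very large tables keyset paging (WHERE customer_name > the last value seen) holds up better.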
Something no one has mentioned yet (so I will): it's usually a bad idea to guess how a query engine might execute a query, especially when there are tools to show you how it will actually behave. Run your query through your database's analyzer tool (whatever that is for your database) and see what it says. For example, adding an index is probably pointless if the table is small or the field has few distinct values: a decent query engine will correctly judge a table scan to be about as efficient, and the overhead of maintaining the index will slow your application down, not speed it up.
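In MySQL the analyzer is just the EXPLAIN keyword in front of the query; other databases have an equivalent. A minimal sketch, with an assumed table name:

    -- Ask the engine how it intends to run the query.
    EXPLAIN
    SELECT *
    FROM employees
    ORDER BY last_name;

The 'type' column tells you whether it will scan the whole table (ALL) or use an index, and 'rows' estimates how many rows it expects to examine.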
1) you read the whole table - typically you want to avoid this in queries, but you haven't provided any search criteria;
2) you sort the results (see the sketch just below).
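A query of that shape, with hypothetical names, would be:

    -- No WHERE clause: full table read, then a sort.
    SELECT *
    FROM orders
    ORDER BY order_date;

EXPLAIN on this will typically show type ALL (a full scan) plus 'Using filesort' in the Extra column, which is exactly the read-everything-then-sort behaviour described above.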
There should also be a 'cost' shown in the plan. IMO it should be kept to a single figure if possible, but I've also had rarely-run reporting queries that were off the scale.
Your example is still pretty simple; it gets more interesting when you have multiple joins, subqueries, matching, indices and the rest of the database features.
The attached screenshot is from a MySQL database viewed through SquirrelSQL. I hid some names to protect the guilty ;)
The second two lines show two table joins, and they use the primary key and only look at a single row each time, regardless of how many records are in the table.
The first line is for the string match, and it is forced to look at all 65 rows in the table (the same as 'FULL' in your example). Not a huge problem here, but it gets bad as the table grows.
If an index is created on that column, the value drops to a single figure and it only accesses the exact rows required. Similar to a hash lookup, the search now only needs to consider rows that are possible matches, and it can identify those candidates without reading every row in the table.
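Creating the index is a one-liner; the names here match the hypothetical examples above rather than the hidden ones in the screenshot:

    -- Index the string column used in the match.
    CREATE INDEX idx_customers_name ON customers (customer_name);

Re-run EXPLAIN afterwards and the 'key' column should show the new index, with 'rows' dropping from the full table count to the handful of matches.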
Typically it means the engine needs to look at 3000 rows to find a match and can't simplify that join.
To improve it, try to join on numeric fields rather than text, always join by primary keys, and always index foreign keys.
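As a sketch of that advice (the schema is invented for illustration, not taken from the screenshot): numeric surrogate keys on both sides, with an explicit index on the foreign key.

    -- Hypothetical schema illustrating numeric, indexed join columns.
    CREATE TABLE departments (
        id   INT PRIMARY KEY,
        name VARCHAR(100)
    );

    CREATE TABLE employees (
        id      INT PRIMARY KEY,
        name    VARCHAR(100),
        dept_id INT NOT NULL,
        INDEX idx_employees_dept (dept_id),
        FOREIGN KEY (dept_id) REFERENCES departments (id)
    );

    -- Joining by primary key / indexed foreign key gives single-row
    -- lookups, like the second two lines in the screenshot.
    SELECT e.name, d.name AS department
    FROM employees e
    JOIN departments d ON d.id = e.dept_id;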