I keep finding files converted to encodings other than Unicode, but XP's Windows Explorer has no detail column for showing the encoding of files in a directory. For those who aren't sure what I'm talking about: open a folder in XP (I don't use Vista or 7; 90% of what I use in XP was removed in Vista and 95% in 7), click the View menu at the top, and choose 'Details' if it isn't already selected. In the Details view of a folder, right-clicking the column headers (Name, Size, Type, Date Modified, for example) lets you customize the columns, but there is no column that shows the encoding each file was saved with.
I'm wondering if there is a graphical program (not a console/terminal tool) on Linux, regardless of distro, that would let me quickly and visually scan my directories for non-Unicode code/text files.
Linux/Unix doesn't store file-type information in the filesystem, unlike some operating systems; to Linux, all files are just sequences of bytes. Determining a file's type is a fairly complex process in which the file often has to be opened and read to scan for "magic" bytes that identify the type, such as Java's infamous hex string 0xCAFEBABE at the start of class files. That is exactly what the `file` program does.
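A minimal sketch of what this looks like in practice, using `file`'s `--mime-encoding` option to print its best guess for each file's encoding (the sample file names here are illustrative):

```shell
# Create small sample files in different encodings.
printf 'plain ascii text\n' > sample-ascii.txt
printf 'latin-1 caf\xE9\n' > sample-latin1.txt

# `file --mime-encoding` reads each file and prints its guessed encoding,
# e.g. "us-ascii" or "iso-8859-1", based on the byte patterns it finds.
file --mime-encoding sample-ascii.txt sample-latin1.txt
```

Note that pure-ASCII files are usually reported as `us-ascii` rather than `utf-8`; since ASCII is a strict subset of UTF-8, both count as "Unicode-safe" for the asker's purposes.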
While XML files declare their character encoding on the first line, many file types do not, and the best that can be done is to scan the whole file and make a guess. That can take a lot of resources, which is why it isn't done as a matter of course.