This is confusing, because you can't select both, even though UTF-8 *is* Unicode. Looking at the help, I see that "Unicode" assumes two bytes per character. Sooo..... that's UTF-16 (well, almost - see below). Call it that, to make it less confusing.
Unicode is NOT a character encoding. It's a standard that assigns positions (code points) to characters in a massively large character set of almost 110,000 characters.
UTF-8 is just the most popular encoding, and UTF-16 is usually what programs use internally. It's more compact than UTF-8 for large quantities of characters above U+07FF (CJK text, for example). But mind you, UTF-16 still needs 4 bytes for some characters, just like UTF-8 allows for more than 1 byte per character. A quick sketch of that is below.
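Just to illustrate the byte counts (a quick Python sketch, nothing to do with this editor's internals):

```python
# Byte counts for a few sample characters in UTF-8 vs UTF-16.
# "A" is ASCII, "é" is in the Latin-1 range, "€" is above U+07FF,
# and "😀" is beyond U+FFFF (so UTF-16 needs a surrogate pair = 4 bytes).
for ch in ["A", "é", "€", "😀"]:
    print(ch, "U+%04X" % ord(ch),
          "UTF-8:", len(ch.encode("utf-8")), "bytes,",
          "UTF-16:", len(ch.encode("utf-16-le")), "bytes")
```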
There's also the massively wasteful UTF-32 encoding, which has a fixed 4 bytes per character. On some systems, this might be necessary for certain characters, because there are more than 65535 positions in the Unicode "set". I won't be mentioning UTF-7, which is used to squeeze through archaic systems that assume "7 bits is quite enough, thank you".
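And the same characters in UTF-32, which always burns 4 bytes each no matter how low the code point is (again just a Python sketch for illustration):

```python
# UTF-32 is fixed-width: every character is 4 bytes, even plain ASCII.
for ch in ["A", "€", "😀"]:
    print(ch, "U+%04X" % ord(ch),
          "UTF-32:", len(ch.encode("utf-32-le")), "bytes")
```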
So that's my 2c

Can this confusion be "fixed" in a future version?