I have some old HTML files which were downloaded and saved to disk around 2004. The text content is in Greek but in various encodings, and I want them all converted to UTF-8. Some have the charset declared in a <meta> tag; some don't.
I can view all of them correctly if I open them in Firefox.
On Linux, I use two methods to detect the encoding from the command line:
file -i file.html
and
enca -Lnone file.html
And after that I convert them to UTF-8 with:
iconv -f fromenc -t 'UTF-8' file.html > fixed.html
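For reference, here is roughly what that conversion step looks like in Python (just a sketch of my own, not code from any particular library; the from-encoding still has to be supplied by hand, which is exactly the problem):

    # Re-encode one file to UTF-8, given a from-encoding that was
    # determined some other way. The convert_to_utf8 name and the
    # paths are mine, purely for illustration.
    def convert_to_utf8(src_path, dst_path, from_encoding):
        with open(src_path, 'r', encoding=from_encoding) as src:
            text = src.read()
        with open(dst_path, 'w', encoding='utf-8') as dst:
            dst.write(text)

    # e.g. convert_to_utf8('file.html', 'fixed.html', 'windows-1253')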
However, some of the output files do not display correctly, either on the command line or in Firefox.
When the corresponding input files are opened in Firefox, they render correctly, and Firefox reports their encoding as windows-1253 (via Firefox's Tools->Page Info menu).
However, the encoding of these input files is reported by both enca and file as iso-8859-1. Is this incorrect detection?
If I try to convert them to UTF-8 with iconv, using this iso-8859-1 as the from-encoding, I get Cyrillic characters. But when I convert them with windows-1253 as the from-encoding, they look OK both on the command line and in Firefox.
What's more, those files have iso-8859-1 as their charset in a <meta> tag, and Firefox still renders them fine (and then reports the windows-1253 encoding via the menu).
So, I am looking for a more reliable method of detecting the encoding of these files on the Linux command line, or even via JavaScript, Python, or Perl.
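For example, something along these lines in Python is roughly what I have in mind -- only a sketch, relying on the third-party chardet module (whose reliability on Greek I have not verified, which is partly why I'm asking), and also pulling out whatever charset the <meta> tag declares so the two can be compared:

    import re
    import chardet  # third-party module: pip install chardet

    def guess_encoding(path):
        with open(path, 'rb') as f:
            raw = f.read()
        # What the byte statistics suggest
        detected = chardet.detect(raw)  # dict with 'encoding' and 'confidence'
        # What the HTML itself claims in a <meta> tag, if anything
        m = re.search(br'charset\s*=\s*["\']?([A-Za-z0-9_.:-]+)', raw, re.IGNORECASE)
        declared = m.group(1).decode('ascii', 'replace') if m else None
        return detected['encoding'], detected['confidence'], declared

    # e.g. guess_encoding('file.html') might return something like
    # ('windows-1253', 0.95, 'iso-8859-1') for the files described above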
Here is example content from such an input file, which Firefox reports as windows-1253 (but whose <meta> tag, file -i and enca -- all three of them -- report as iso-8859-1):
Δεν είναι μόνο
and as a hexdump:
e5c4 20ed dfe5 e1ed 20e9 fcec efed 000a
and here it is wrongly rendered with Cyrillic characters after the iconv command:
ƒен еянбй мьнп
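A quick sanity check in Python with the bytes from that hexdump (note the hexdump words are little-endian, so e5c4 is really the byte c4 followed by e5) also shows that windows-1253 is the interpretation that produces the correct Greek -- again, just a manual check, not a general solution:

    # The text bytes from the hexdump above, in actual file order.
    raw = b'\xc4\xe5\xed \xe5\xdf\xed\xe1\xe9 \xec\xfc\xed\xef'

    print(raw.decode('windows-1253'))  # -> Δεν είναι μόνο  (correct)
    print(raw.decode('iso-8859-1'))    # -> mojibake, clearly not Greek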