.. _unicode-howto:

*****************
  Unicode HOWTO
*****************

:Release: 1.12

This HOWTO discusses Python's support for Unicode, and explains
various problems that people commonly encounter when trying to work
with Unicode.

Introduction to Unicode
=======================

History of Character Codes
--------------------------

In 1968, the American Standard Code for Information Interchange, better known by
its acronym ASCII, was standardized.  ASCII defined numeric codes for various
characters, with the numeric values running from 0 to 127.  For example, the
lowercase letter 'a' is assigned 97 as its code value.

ASCII was an American-developed standard, so it only defined unaccented
characters.  There was an 'e', but no 'é' or 'Í'.  This meant that languages
which required accented characters couldn't be faithfully represented in ASCII.
(Actually the missing accents matter for English, too, which contains words such
as 'naïve' and 'café', and some publications have house styles which require
spellings such as 'coöperate'.)

For a while people just wrote programs that didn't display accents.
In the mid-1980s an Apple II BASIC program written by a French speaker
might have lines like these:

.. code-block:: basic

   PRINT "MISE A JOUR TERMINEE"
   PRINT "PARAMETRES ENREGISTRES"

Those messages should contain accents (terminée, paramètre, enregistrés) and
they just look wrong to someone who can read French.

In the 1980s, almost all personal computers were 8-bit, meaning that bytes could
hold values ranging from 0 to 255.  ASCII codes only went up to 127, so some
machines assigned values between 128 and 255 to accented characters.  Different
machines had different codes, however, which led to problems exchanging files.
Eventually various commonly used sets of values for the 128--255 range emerged.
Some were true standards, defined by the International Organization for
Standardization, and some were *de facto* conventions that were invented by one
company or another and managed to catch on.

256 characters aren't very many.  For example, you can't fit both the accented
characters used in Western Europe and the Cyrillic alphabet used for Russian
into the 128--255 range because there are more than 128 such characters.

You could write files using different codes (all your Russian files in a coding
system called KOI8, all your French files in a different coding system called
Latin1), but what if you wanted to write a French document that quotes some
Russian text?  In the 1980s people began to want to solve this problem, and the
Unicode standardization effort began.

Unicode started out using 16-bit characters instead of 8-bit characters.  16
bits means you have 2^16 = 65,536 distinct values available, making it possible
to represent many different characters from many different alphabets; an initial
goal was to have Unicode contain the alphabets for every single human language.
It turns out that even 16 bits isn't enough to meet that goal, and the modern
Unicode specification uses a wider range of codes, 0 through 1,114,111
(``0x10FFFF`` in base 16).

There's a related ISO standard, ISO 10646.  Unicode and ISO 10646 were
originally separate efforts, but the specifications were merged with the 1.1
revision of Unicode.

(This discussion of Unicode's history is highly simplified.  The
precise historical details aren't necessary for understanding how to
use Unicode effectively, but if you're curious, consult the Unicode
consortium site listed in the References or
the `Wikipedia entry for Unicode <https://en.wikipedia.org/wiki/Unicode#History>`_
for more information.)


Definitions
-----------

A **character** is the smallest possible component of a text.  'A', 'B', 'C',
etc., are all different characters.  So are 'È' and 'Í'.  Characters are
abstractions, and vary depending on the language or context you're talking
about.  For example, the symbol for ohms (Ω) is usually drawn much like the
capital letter omega (Ω) in the Greek alphabet (they may even be the same in
some fonts), but these are two different characters that have different
meanings.

The Unicode standard describes how characters are represented by **code
points**.  A code point is an integer value, usually denoted in base 16.  In the
standard, a code point is written using the notation ``U+12CA`` to mean the
character with value ``0x12ca`` (4,810 decimal).  The Unicode standard contains
a lot of tables listing characters and their corresponding code points:

.. code-block:: none

   0061    'a'; LATIN SMALL LETTER A
   0062    'b'; LATIN SMALL LETTER B
   0063    'c'; LATIN SMALL LETTER C
   ...
   007B    '{'; LEFT CURLY BRACKET

Strictly, these definitions imply that it's meaningless to say 'this is
character ``U+12CA``'.  ``U+12CA`` is a code point, which represents some particular
character; in this case, it represents the character 'ETHIOPIC SYLLABLE WI'.  In
informal contexts, this distinction between code points and characters will
sometimes be forgotten.
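
For example, you can check this correspondence in the interpreter: the built-in
:func:`ord` function (described further below) returns the code point of a
one-character string, and :func:`chr` goes the other way::

    >>> ord('\u12ca')
    4810
    >>> hex(ord('\u12ca'))
    '0x12ca'
    >>> chr(0x12ca)
    'ዊ'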

A character is represented on a screen or on paper by a set of graphical
elements that's called a **glyph**.  The glyph for an uppercase A, for example,
is two diagonal strokes and a horizontal stroke, though the exact details will
depend on the font being used.  Most Python code doesn't need to worry about
glyphs; figuring out the correct glyph to display is generally the job of a GUI
toolkit or a terminal's font renderer.


Encodings
---------

To summarize the previous section: a Unicode string is a sequence of code
points, which are numbers from 0 through ``0x10FFFF`` (1,114,111 decimal).  This
sequence needs to be represented as a set of bytes (meaning, values
from 0 through 255) in memory.  The rules for translating a Unicode string
into a sequence of bytes are called an **encoding**.

The first encoding you might think of is an array of 32-bit integers.  In this
representation, the string "Python" would look like this:

.. code-block:: none

       P           y           t           h           o           n
    0x50 00 00 00 79 00 00 00 74 00 00 00 68 00 00 00 6f 00 00 00 6e 00 00 00
       0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23

This representation is straightforward but using it presents a number of
problems.

1. It's not portable; different processors order the bytes differently.

2. It's very wasteful of space.  In most texts, the majority of the code points
   are less than 127, or less than 255, so a lot of space is occupied by ``0x00``
   bytes.  The above string takes 24 bytes compared to the 6 bytes needed for an
   ASCII representation.  Increased RAM usage doesn't matter too much (desktop
   computers have gigabytes of RAM, and strings aren't usually that large), but
   expanding our usage of disk and network bandwidth by a factor of 4 is
   intolerable.

3. It's not compatible with existing C functions such as ``strlen()``, so a new
   family of wide string functions would need to be used.

4. Many Internet standards are defined in terms of textual data, and can't
   handle content with embedded zero bytes.
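
Incidentally, this representation is essentially Python's ``utf-32-le`` codec
(the little-endian byte layout shown above), so you can inspect it directly::

    >>> data = 'Python'.encode('utf-32-le')
    >>> len(data)
    24
    >>> data[:8]
    b'P\x00\x00\x00y\x00\x00\x00'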

Generally people don't use this encoding, instead choosing other
encodings that are more efficient and convenient.  UTF-8 is probably
the most commonly supported encoding; it will be discussed below.

Encodings don't have to handle every possible Unicode character, and most
encodings don't.  The rules for converting a Unicode string into the ASCII
encoding, for example, are simple; for each code point:

1. If the code point is < 128, each byte is the same as the value of the code
   point.

2. If the code point is 128 or greater, the Unicode string can't be represented
   in this encoding.  (Python raises a :exc:`UnicodeEncodeError` exception in this
   case.)

Latin-1, also known as ISO-8859-1, is a similar encoding.  Unicode code points
0--255 are identical to the Latin-1 values, so converting to this encoding simply
requires converting code points to byte values; if a code point larger than 255
is encountered, the string can't be encoded into Latin-1.
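
For example, 'é' is code point ``U+00E9``, so in Latin-1 it is simply the byte
``0xE9``, while a character outside the 0--255 range can't be encoded::

    >>> 'é'.encode('latin-1')
    b'\xe9'
    >>> '\u0141'.encode('latin-1')  #doctest: +NORMALIZE_WHITESPACE
    Traceback (most recent call last):
        ...
    UnicodeEncodeError: 'latin-1' codec can't encode character '\u0141' in
      position 0: ordinal not in range(256)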

Encodings don't have to be simple one-to-one mappings like Latin-1.  Consider
IBM's EBCDIC, which was used on IBM mainframes.  Letter values weren't in one
block: 'a' through 'i' had values from 129 to 137, but 'j' through 'r' were 145
through 153.  If you wanted to use EBCDIC as an encoding, you'd probably use
some sort of lookup table to perform the conversion, but this is largely an
internal detail.

UTF-8 is one of the most commonly used encodings.  UTF stands for "Unicode
Transformation Format", and the '8' means that 8-bit numbers are used in the
encoding.  (There are also UTF-16 and UTF-32 encodings, but they are less
frequently used than UTF-8.)  UTF-8 uses the following rules:

1. If the code point is < 128, it's represented by the corresponding byte value.
2. If the code point is >= 128, it's turned into a sequence of two, three, or
   four bytes, where each byte of the sequence is between 128 and 255.

UTF-8 has several convenient properties:

1. It can handle any Unicode code point.
2. A Unicode string is turned into a sequence of bytes containing no embedded zero
   bytes.  This avoids byte-ordering issues, and means UTF-8 strings can be
   processed by C functions such as ``strcpy()`` and sent through protocols that
   can't handle zero bytes.
3. A string of ASCII text is also valid UTF-8 text.
4. UTF-8 is fairly compact; the majority of commonly used characters can be
   represented with one or two bytes.
5. If bytes are corrupted or lost, it's possible to determine the start of the
   next UTF-8-encoded code point and resynchronize.  It's also unlikely that
   random 8-bit data will look like valid UTF-8.
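
A quick illustration of the rules: ASCII characters encode to themselves, while
other code points become multi-byte sequences::

    >>> 'abc'.encode('utf-8')       # pure ASCII: one byte per character
    b'abc'
    >>> 'café'.encode('utf-8')      # 'é' needs two bytes
    b'caf\xc3\xa9'
    >>> '\u4500'.encode('utf-8')    # a CJK character needs three
    b'\xe4\x94\x80'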



References
----------

The `Unicode Consortium site <http://www.unicode.org>`_ has character charts, a
glossary, and PDF versions of the Unicode specification.  Be prepared for some
difficult reading.  `A chronology <http://www.unicode.org/history/>`_ of the
origin and development of Unicode is also available on the site.

To help understand the standard, Jukka Korpela has written `an introductory
guide <http://jkorpela.fi/unicode/guide.html>`_ to reading the
Unicode character tables.

Another `good introductory article <https://www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/>`_
was written by Joel Spolsky.
If this introduction didn't make things clear to you, you should try
reading this alternate article before continuing.

Wikipedia entries are often helpful; see the entries for "`character encoding
<https://en.wikipedia.org/wiki/Character_encoding>`_" and `UTF-8
<https://en.wikipedia.org/wiki/UTF-8>`_, for example.


Python's Unicode Support
========================

Now that you've learned the rudiments of Unicode, we can look at Python's
Unicode features.

The String Type
---------------

Since Python 3.0, the language features a :class:`str` type that contains
Unicode characters, meaning any string created using ``"unicode rocks!"``,
``'unicode rocks!'``, or the triple-quoted string syntax is stored as Unicode.

The default encoding for Python source code is UTF-8, so you can simply
include a Unicode character in a string literal::

   try:
       with open('/tmp/input.txt', 'r') as f:
           ...
   except OSError:
       # 'File not found' error message.
       print("Fichier non trouvé")

You can use a different encoding from UTF-8 by putting a specially-formatted
comment as the first or second line of the source code::

   # -*- coding: <encoding name> -*-

Side note: Python 3 also supports using Unicode characters in identifiers::

   répertoire = "/tmp/records.log"
   with open(répertoire, "w") as f:
       f.write("test\n")

If you can't enter a particular character in your editor or want to
keep the source code ASCII-only for some reason, you can also use
escape sequences in string literals. (Depending on your system,
you may see the actual capital-delta glyph instead of a \u escape.) ::

   >>> "\N{GREEK CAPITAL LETTER DELTA}"  # Using the character name
   '\u0394'
   >>> "\u0394"                          # Using a 16-bit hex value
   '\u0394'
   >>> "\U00000394"                      # Using a 32-bit hex value
   '\u0394'

In addition, one can create a string using the :func:`~bytes.decode` method of
:class:`bytes`.  This method takes an *encoding* argument, such as ``UTF-8``,
and optionally an *errors* argument.

The *errors* argument specifies the response when the input string can't be
converted according to the encoding's rules.  Legal values for this argument are
``'strict'`` (raise a :exc:`UnicodeDecodeError` exception), ``'replace'`` (use
``U+FFFD``, ``REPLACEMENT CHARACTER``), ``'ignore'`` (just leave the
character out of the Unicode result), or ``'backslashreplace'`` (inserts a
``\xNN`` escape sequence).
The following examples show the differences::

    >>> b'\x80abc'.decode("utf-8", "strict")  #doctest: +NORMALIZE_WHITESPACE
    Traceback (most recent call last):
        ...
    UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0:
      invalid start byte
    >>> b'\x80abc'.decode("utf-8", "replace")
    '\ufffdabc'
    >>> b'\x80abc'.decode("utf-8", "backslashreplace")
    '\\x80abc'
    >>> b'\x80abc'.decode("utf-8", "ignore")
    'abc'

Encodings are specified as strings containing the encoding's name.  Python 3.2
comes with roughly 100 different encodings; see the Python Library Reference at
:ref:`standard-encodings` for a list.  Some encodings have multiple names; for
example, ``'latin-1'``, ``'iso_8859_1'`` and ``'8859'`` are all synonyms for
the same encoding.

One-character Unicode strings can also be created with the :func:`chr`
built-in function, which takes integers and returns a Unicode string of length 1
that contains the corresponding code point.  The reverse operation is the
built-in :func:`ord` function that takes a one-character Unicode string and
returns the code point value::

    >>> chr(57344)
    '\ue000'
    >>> ord('\ue000')
    57344

Converting to Bytes
-------------------

The opposite method of :meth:`bytes.decode` is :meth:`str.encode`,
which returns a :class:`bytes` representation of the Unicode string, encoded in the
requested *encoding*.

The *errors* parameter is the same as the parameter of the
:meth:`~bytes.decode` method but supports a few more possible handlers. As well as
``'strict'``, ``'ignore'``, and ``'replace'`` (which in this case
inserts a question mark instead of the unencodable character), there is
also ``'xmlcharrefreplace'`` (inserts an XML character reference),
``'backslashreplace'`` (inserts a ``\uNNNN`` escape sequence) and
``'namereplace'`` (inserts a ``\N{...}`` escape sequence).

The following example shows the different results::

    >>> u = chr(40960) + 'abcd' + chr(1972)
    >>> u.encode('utf-8')
    b'\xea\x80\x80abcd\xde\xb4'
    >>> u.encode('ascii')  #doctest: +NORMALIZE_WHITESPACE
    Traceback (most recent call last):
        ...
    UnicodeEncodeError: 'ascii' codec can't encode character '\ua000' in
      position 0: ordinal not in range(128)
    >>> u.encode('ascii', 'ignore')
    b'abcd'
    >>> u.encode('ascii', 'replace')
    b'?abcd?'
    >>> u.encode('ascii', 'xmlcharrefreplace')
    b'&#40960;abcd&#1972;'
    >>> u.encode('ascii', 'backslashreplace')
    b'\\ua000abcd\\u07b4'
    >>> u.encode('ascii', 'namereplace')
    b'\\N{YI SYLLABLE IT}abcd\\u07b4'

The low-level routines for registering and accessing the available
encodings are found in the :mod:`codecs` module.  Implementing new
encodings also requires understanding the :mod:`codecs` module.
However, the encoding and decoding functions returned by this module
are usually more low-level than is comfortable, and writing new encodings
is a specialized task, so the module won't be covered in this HOWTO.


Unicode Literals in Python Source Code
--------------------------------------

In Python source code, specific Unicode code points can be written using the
``\u`` escape sequence, which is followed by four hex digits giving the code
point.  The ``\U`` escape sequence is similar, but expects eight hex digits,
not four::

    >>> s = "a\xac\u1234\u20ac\U00008000"
    ... #     ^^^^ two-digit hex escape
    ... #         ^^^^^^ four-digit Unicode escape
    ... #                     ^^^^^^^^^^ eight-digit Unicode escape
    >>> [ord(c) for c in s]
    [97, 172, 4660, 8364, 32768]

Using escape sequences for code points greater than 127 is fine in small doses,
but becomes an annoyance if you're using many accented characters, as you would
in a program with messages in French or some other accent-using language.  You
can also assemble strings using the :func:`chr` built-in function, but this is
even more tedious.

Ideally, you'd want to be able to write literals in your language's natural
encoding.  You could then edit Python source code with your favorite editor
which would display the accented characters naturally, and have the right
characters used at runtime.

Python supports writing source code in UTF-8 by default, but you can use almost
any encoding if you declare the encoding being used.  This is done by including
a special comment as either the first or second line of the source file::

    #!/usr/bin/env python
    # -*- coding: latin-1 -*-

    u = 'abcdé'
    print(ord(u[-1]))

The syntax is inspired by Emacs's notation for specifying variables local to a
file.  Emacs supports many different variables, but Python only supports
'coding'.  The ``-*-`` symbols indicate to Emacs that the comment is special;
they have no significance to Python but are a convention.  Python looks for
``coding: name`` or ``coding=name`` in the comment.

If you don't include such a comment, the default encoding used will be UTF-8 as
already mentioned.  See also :pep:`263` for more information.


Unicode Properties
------------------

The Unicode specification includes a database of information about code points.
For each defined code point, the information includes the character's
name, its category, and its numeric value if applicable (Unicode has characters
representing the Roman numerals and fractions such as one-third and
four-fifths).  There are also properties related to the code point's use in
bidirectional text and other display-related properties.

The following program displays some information about several characters, and
prints the numeric value of one particular character::

    import unicodedata

    u = chr(233) + chr(0x0bf2) + chr(3972) + chr(6000) + chr(13231)

    for i, c in enumerate(u):
        print(i, '%04x' % ord(c), unicodedata.category(c), end=" ")
        print(unicodedata.name(c))

    # Get numeric value of second character
    print(unicodedata.numeric(u[1]))

When run, this prints:

.. code-block:: none

    0 00e9 Ll LATIN SMALL LETTER E WITH ACUTE
    1 0bf2 No TAMIL NUMBER ONE THOUSAND
    2 0f84 Mn TIBETAN MARK HALANTA
    3 1770 Lo TAGBANWA LETTER SA
    4 33af So SQUARE RAD OVER S SQUARED
    1000.0

The category codes are abbreviations describing the nature of the character.
These are grouped into categories such as "Letter", "Number", "Punctuation", or
"Symbol", which in turn are broken up into subcategories.  To take the codes
from the above output, ``'Ll'`` means 'Letter, lowercase', ``'No'`` means
"Number, other", ``'Mn'`` is "Mark, nonspacing", and ``'So'`` is "Symbol,
other".  See
`the General Category Values section of the Unicode Character Database documentation <http://www.unicode.org/reports/tr44/#General_Category_Values>`_ for a
list of category codes.
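
As a practical illustration, categories let you filter characters by their
nature.  The following sketch (a hypothetical helper, not part of the standard
library) strips accents by decomposing each character with
:func:`unicodedata.normalize` and dropping the nonspacing marks (``'Mn'``)::

    import unicodedata

    def strip_accents(s):
        # NFD decomposes 'é' into 'e' plus a combining acute accent.
        decomposed = unicodedata.normalize('NFD', s)
        # Keep everything except the combining marks (category 'Mn').
        return ''.join(c for c in decomposed
                       if unicodedata.category(c) != 'Mn')

    print(strip_accents('café naïve'))   # prints: cafe naive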


Unicode Regular Expressions
---------------------------

The regular expressions supported by the :mod:`re` module can be provided
either as bytes or strings.  Some of the special character sequences such as
``\d`` and ``\w`` have different meanings depending on whether
the pattern is supplied as bytes or a string.  For example,
``\d`` will match the characters ``[0-9]`` in bytes but
in strings will match any character that's in the ``'Nd'`` category.

The string in this example has the number 57 written in both Thai and
Arabic numerals::

   import re
   p = re.compile(r'\d+')

   s = "Over \u0e55\u0e57 57 flavours"
   m = p.search(s)
   print(repr(m.group()))

When executed, ``\d+`` will match the Thai numerals and print them
out.  If you supply the :const:`re.ASCII` flag to
:func:`~re.compile`, ``\d+`` will match the substring "57" instead.

Similarly, ``\w`` matches a wide variety of Unicode characters but
only ``[a-zA-Z0-9_]`` in bytes or if :const:`re.ASCII` is supplied,
and ``\s`` will match either Unicode whitespace characters or
``[ \t\n\r\f\v]``.
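
For instance, re-running the search with the :const:`re.ASCII` flag (a small
variation on the earlier example) skips the Thai digits::

   import re

   p = re.compile(r'\d+', re.ASCII)
   s = "Over \u0e55\u0e57 57 flavours"
   m = p.search(s)
   print(repr(m.group()))   # prints '57'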


References
----------

.. comment should these be mentioned earlier, e.g. at the start of the "introduction to Unicode" first section?

Some good alternative discussions of Python's Unicode support are:

* `Processing Text Files in Python 3 <http://python-notes.curiousefficiency.org/en/latest/python3/text_file_processing.html>`_, by Nick Coghlan.
* `Pragmatic Unicode <https://nedbatchelder.com/text/unipain.html>`_, a PyCon 2012 presentation by Ned Batchelder.

The :class:`str` type is described in the Python library reference at
:ref:`textseq`.

The documentation for the :mod:`unicodedata` module.

The documentation for the :mod:`codecs` module.

Marc-André Lemburg gave `a presentation titled "Python and Unicode" (PDF slides)
<https://downloads.egenix.com/python/Unicode-EPC2002-Talk.pdf>`_ at
EuroPython 2002.  The slides are an excellent overview of the design of Python
2's Unicode features (where the Unicode string type is called ``unicode`` and
literals start with ``u``).


Reading and Writing Unicode Data
================================

Once you've written some code that works with Unicode data, the next problem is
input/output.  How do you get Unicode strings into your program, and how do you
convert Unicode into a form suitable for storage or transmission?

It's possible that you may not need to do anything depending on your input
sources and output destinations; you should check whether the libraries used in
your application support Unicode natively.  XML parsers often return Unicode
data, for example.  Many relational databases also support Unicode-valued
columns and can return Unicode values from an SQL query.

Unicode data is usually converted to a particular encoding before it gets
written to disk or sent over a socket.  It's possible to do all the work
yourself: open a file, read an 8-bit bytes object from it, and convert the bytes
with ``bytes.decode(encoding)``.  However, the manual approach is not recommended.

One problem is the multi-byte nature of encodings; one Unicode character can be
represented by several bytes.  If you want to read the file in arbitrary-sized
chunks (say, 1024 or 4096 bytes), you need to write error-handling code to catch the case
where only part of the bytes encoding a single Unicode character are read at the
end of a chunk.  One solution would be to read the entire file into memory and
then perform the decoding, but that prevents you from working with files that
are extremely large; if you need to read a 2 GiB file, you need 2 GiB of RAM.
(More, really, since for at least a moment you'd need to have both the encoded
string and its Unicode version in memory.)

The solution would be to use the low-level decoding interface to catch the case
of partial coding sequences.  The work of implementing this has already been
done for you: the built-in :func:`open` function can return a file-like object
that assumes the file's contents are in a specified encoding and accepts Unicode
parameters for methods such as :meth:`~io.TextIOBase.read` and
:meth:`~io.TextIOBase.write`.  This works through :func:`open`\'s *encoding* and
*errors* parameters which are interpreted just like those in :meth:`str.encode`
and :meth:`bytes.decode`.

Reading Unicode from a file is therefore simple::

    with open('unicode.txt', encoding='utf-8') as f:
        for line in f:
            print(repr(line))

It's also possible to open files in update mode, allowing both reading and
writing::

    with open('test', encoding='utf-8', mode='w+') as f:
        f.write('\u4500 blah blah blah\n')
        f.seek(0)
        print(repr(f.readline()[:1]))

The Unicode character ``U+FEFF`` is used as a byte-order mark (BOM), and is often
written as the first character of a file in order to assist with autodetection
of the file's byte ordering.  Some encodings, such as UTF-16, expect a BOM to be
present at the start of a file; when such an encoding is used, the BOM will be
automatically written as the first character and will be silently dropped when
the file is read.  There are variants of these encodings, such as 'utf-16-le'
and 'utf-16-be' for little-endian and big-endian encodings, that specify one
particular byte ordering and don't skip the BOM.
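
You can see the difference in the encoded bytes (shown here on a little-endian
machine, where the native-order ``utf-16`` codec writes the BOM ``FF FE``)::

    >>> 'abc'.encode('utf-16')      # BOM prepended automatically
    b'\xff\xfea\x00b\x00c\x00'
    >>> 'abc'.encode('utf-16-le')   # explicit byte order, no BOM
    b'a\x00b\x00c\x00'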

In some areas, it is also convention to use a "BOM" at the start of UTF-8
encoded files; the name is misleading since UTF-8 is not byte-order dependent.
The mark simply announces that the file is encoded in UTF-8.  Use the
'utf-8-sig' codec to automatically skip the mark if present for reading such
files.
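
For example, decoding with ``utf-8-sig`` strips a leading BOM if one is
present, while plain ``utf-8`` keeps it as the character ``U+FEFF``::

    >>> b'\xef\xbb\xbfhello'.decode('utf-8-sig')
    'hello'
    >>> b'\xef\xbb\xbfhello'.decode('utf-8')
    '\ufeffhello'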


Unicode filenames
-----------------

Most of the operating systems in common use today support filenames that contain
arbitrary Unicode characters.  Usually this is implemented by converting the
Unicode string into some encoding that varies depending on the system.  For
example, Mac OS X uses UTF-8 while Windows uses a configurable encoding; on
Windows, Python uses the name "mbcs" to refer to whatever the currently
configured encoding is.  On Unix systems, there will only be a filesystem
encoding if you've set the ``LANG`` or ``LC_CTYPE`` environment variables; if
you haven't, the default encoding is UTF-8.

The :func:`sys.getfilesystemencoding` function returns the encoding to use on
your current system, in case you want to do the encoding manually, but there's
not much reason to bother.  When opening a file for reading or writing, you can
usually just provide the Unicode string as the filename, and it will be
automatically converted to the right encoding for you::

    filename = 'filename\u4500abc'
    with open(filename, 'w') as f:
        f.write('blah\n')

Functions in the :mod:`os` module such as :func:`os.stat` will also accept Unicode
filenames.

The :func:`os.listdir` function returns filenames and raises an issue: should it return
the Unicode version of filenames, or should it return bytes containing
the encoded versions?  :func:`os.listdir` will do both, depending on whether you
provided the directory path as bytes or a Unicode string.  If you pass a
Unicode string as the path, filenames will be decoded using the filesystem's
encoding and a list of Unicode strings will be returned, while passing a byte
path will return the filenames as bytes.  For example,
assuming the default filesystem encoding is UTF-8, running the following
program::

   fn = 'filename\u4500abc'
   f = open(fn, 'w')
   f.close()

   import os
   print(os.listdir(b'.'))
   print(os.listdir('.'))

will produce the following output:

.. code-block:: shell-session

   amk:~$ python t.py
   [b'filename\xe4\x94\x80abc', ...]
   ['filename\u4500abc', ...]

The first list contains UTF-8-encoded filenames, and the second list contains
the Unicode versions.

Note that on most occasions, the Unicode APIs should be used.  The bytes APIs
should only be used on systems where undecodable file names can be present,
i.e. Unix systems.


Tips for Writing Unicode-aware Programs
---------------------------------------

This section provides some suggestions on writing software that deals with
Unicode.

The most important tip is:

    Software should only work with Unicode strings internally, decoding the input
    data as soon as possible and encoding the output only at the end.

If you attempt to write processing functions that accept both Unicode and byte
strings, you will find your program vulnerable to bugs wherever you combine the
two different kinds of strings.  There is no automatic encoding or decoding: if
you do e.g. ``str + bytes``, a :exc:`TypeError` will be raised.
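
For example, concatenating the two types fails immediately rather than silently
guessing at an encoding (the exact message varies across Python versions)::

    >>> 'one' + b'two'
    Traceback (most recent call last):
        ...
    TypeError: can only concatenate str (not "bytes") to str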

When using data coming from a web browser or some other untrusted source, a
common technique is to check for illegal characters in a string before using the
string in a generated command line or storing it in a database.  If you're doing
this, be careful to check the decoded string, not the encoded bytes data;
some encodings may have interesting properties, such as not being bijective
or not being fully ASCII-compatible.  This is especially true if the input
data also specifies the encoding, since the attacker can then choose a
clever way to hide malicious text in the encoded bytestream.


Converting Between File Encodings
'''''''''''''''''''''''''''''''''

The :class:`~codecs.StreamRecoder` class can transparently convert between
encodings, taking a stream that returns data in encoding #1
and behaving like a stream returning data in encoding #2.

For example, if you have an input file *f* that's in Latin-1, you
can wrap it with a :class:`~codecs.StreamRecoder` to return bytes encoded in
UTF-8::

    new_f = codecs.StreamRecoder(f,
        # en/decoder: used by read() to encode its results and
        # by write() to decode its input.
        codecs.getencoder('utf-8'), codecs.getdecoder('utf-8'),

        # reader/writer: used to read and write to the stream.
        codecs.getreader('latin-1'), codecs.getwriter('latin-1') )


Files in an Unknown Encoding
''''''''''''''''''''''''''''

What can you do if you need to make a change to a file, but don't know
the file's encoding?  If you know the encoding is ASCII-compatible and
only want to examine or modify the ASCII parts, you can open the file
with the ``surrogateescape`` error handler::

   with open(fname, 'r', encoding="ascii", errors="surrogateescape") as f:
       data = f.read()

   # make changes to the string 'data'

   with open(fname + '.new', 'w',
             encoding="ascii", errors="surrogateescape") as f:
       f.write(data)

The ``surrogateescape`` error handler will decode any non-ASCII bytes
as code points in a special range running from U+DC80 to U+DCFF (these
are low surrogate code points, which never appear in well-formed text).
These code points will then be turned back into the
same bytes when the ``surrogateescape`` error handler is used when
encoding the data and writing it back out.
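
The round trip can be seen at the level of an individual byte string::

    >>> raw = b'Q\x80'                        # 0x80 is not valid ASCII
    >>> text = raw.decode('ascii', 'surrogateescape')
    >>> text
    'Q\udc80'
    >>> text.encode('ascii', 'surrogateescape') == raw
    True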


References
----------

One section of `Mastering Python 3 Input/Output
<http://pyvideo.org/video/289/pycon-2010--mastering-python-3-i-o>`_,
a PyCon 2010 talk by David Beazley, discusses text processing and binary data handling.

The `PDF slides for Marc-André Lemburg's presentation "Writing Unicode-aware
Applications in Python"
<https://downloads.egenix.com/python/LSM2005-Developing-Unicode-aware-applications-in-Python.pdf>`_
discuss questions of character encodings as well as how to internationalize
and localize an application.  These slides cover Python 2.x only.

`The Guts of Unicode in Python
<http://pyvideo.org/video/1768/the-guts-of-unicode-in-python>`_
is a PyCon 2013 talk by Benjamin Peterson that discusses the internal Unicode
representation in Python 3.3.

Acknowledgements
================

The initial draft of this document was written by Andrew Kuchling.
It has since been revised further by Alexander Belopolsky, Georg Brandl,
Andrew Kuchling, and Ezio Melotti.

Thanks to the following people who have noted errors or offered
suggestions on this article: Éric Araujo, Nicholas Bastin, Nick
Coghlan, Marius Gedminas, Kent Johnson, Ken Krugler, Marc-André
Lemburg, Martin von Löwis, Terry J. Reedy, Chad Whitacre.