Re: Python crash course

Input and Output

IO (or I/O, short for input/output) is the concept of reading data (the "input") from a file or stream and sending data to a destination (the "output"). IO can be done in many different ways. In this tutorial, we'll cover IO using the print() function, the input() function, and IO using files.

Using print to send output to a file or stream

In Python 3.x, the print() function's signature looks like this:

print(*objects, sep=' ', end='\n', file=sys.stdout, flush=False)

You will notice a couple of unusual things about this function's signature. The *objects parameter might look out of place. The * in a parameter list means the function accepts any number of positional arguments, which are collected into a tuple named objects; in other words, print() can be given an indefinite number of things to print.
The sep=' ' parameter is the separator placed between the objects being printed. By default a single space separates each pair of objects, and this can be changed to any string you like.
The end='\n' parameter is the string written after all the objects have been printed. By default it is '\n' (a newline), which moves the cursor to the start of the next line so that later output appears below the text just printed.
The file=sys.stdout parameter is the stream or file object the output is sent to. This means print() can technically be used to write data to files, although the file methods covered later in this post (such as write()) are usually the better tool for that job.
The flush=False parameter controls whether the stream is forcibly flushed after printing. By default the text may sit in a buffer and be written out a little later, which is usually more efficient; passing flush=True pushes the output to its destination immediately, which is useful for things like progress messages. (An example of the file and flush parameters follows the basic examples below.)
An example print statement would be as follows:

print ("Hello world!")

An example print statement that sets the sep and end parameters would be as follows:

print("Hello", "world!", sep="\0", end="\n")

In this statement, the null character, \0, is placed between the two objects instead of the default space, and a newline (\n) is written at the end as the line terminator.
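
Here is a short sketch of the file and flush parameters described earlier. The filename log.txt is just a placeholder made up for the example:

import sys

# Send output to standard error instead of standard output.
print("Something went wrong!", file=sys.stderr)

# Send output to a file object and flush the stream immediately.
with open("log.txt", "w") as log_file:
    print("Hello world!", file=log_file, flush=True)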
In Python 2.x, print is actually a statement (print "Hello world!") rather than a function. A function form matching the one above is available after from __future__ import print_function, and its signature is as follows:

print(*objects, sep=' ', end='\n', file=sys.stdout)

As you can see, there is no flush parameter, so you cannot force the output to be flushed from print itself; you would call something like sys.stdout.flush() instead.
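
A minimal sketch for Python 2.x (the statement form is shown in a comment because the __future__ import applies to the whole module):

# Python 2.x -- the statement form would be:
#     print "Hello world!"
# To get the function form shown above, opt in at the top of the module:
from __future__ import print_function

print("Hello", "world!", sep=" ", end="\n")   # note: no flush parameter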

Reading and writing to files

Reading and writing to files is similar to using standard IO. In this tutorial, I'll use open(), as it provides one of the cleanest ways of writing to and reading from files.
The syntax for open() is as follows:

open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None)

Open() opens file and returns a corresponding file object. If the file cannot be opened, an OSError is raised.
file is either a string or bytes object giving the pathname (absolute or relative to the current working directory) of the file to be opened or an integer file descriptor of the file to be wrapped. (If a file descriptor is given, it is closed when the returned I/O object is closed, unless closefd is set to False.)
mode is an optional string that specifies the mode in which the file is opened. It defaults to 'r' which means open for reading in text mode. Other common values are 'w' for writing (truncating the file if it already exists), 'x' for exclusive creation and 'a' for appending (which on some Unix systems, means that all writes append to the end of the file regardless of the current seek position). In text mode, if encoding is not specified the encoding used is platform dependent: locale.getpreferredencoding(False) is called to get the current locale encoding. (For reading and writing raw bytes use binary mode and leave encoding unspecified.) The available modes are:
  • 'r': open for reading (default)
  • 'w': open for writing, truncating the file first
  • 'x': open for exclusive creation, failing if the file already exists
  • 'a': open for writing, appending to the end of the file if it exists
  • 'b': binary mode
  • 't': text mode (default)
  • '+': open a disk file for updating (reading and writing)
  • 'U': universal newlines mode (deprecated)
The default mode is 'r' (open for reading text, synonym of 'rt'). For binary read-write access, the mode 'w+b' opens and truncates the file to 0 bytes. 'r+b' opens the file without truncation.
As mentioned in the Overview in the Python documentation, Python distinguishes between binary and text I/O. Files opened in binary mode (including 'b' in the mode argument) return contents as bytes objects without any decoding. In text mode (the default, or when 't' is included in the mode argument), the contents of the file are returned as str, the bytes having been first decoded using a platform-dependent encoding or using the specified encoding if given.
Note that Python doesn’t depend on the underlying operating system’s notion of text files; all the processing is done by Python itself, and is therefore platform-independent.
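
As a quick sketch of the common modes and of the text/binary distinction (the filename example.txt is just a placeholder):

# Create (or truncate) the file and write some text.
with open("example.txt", "w", encoding="utf-8") as f:
    f.write("first line\n")

# Append without truncating.
with open("example.txt", "a", encoding="utf-8") as f:
    f.write("second line\n")

# Text mode: read() returns str.
with open("example.txt", "r", encoding="utf-8") as f:
    print(f.read())

# Binary mode: read() returns bytes, no decoding is performed.
with open("example.txt", "rb") as f:
    print(f.read())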
buffering is an optional integer used to set the buffering policy. Pass 0 to switch buffering off (only allowed in binary mode), 1 to select line buffering (only usable in text mode), and an integer > 1 to indicate the size in bytes of a fixed-size chunk buffer. When no buffering argument is given, the default buffering policy works as follows:

  • Binary files are buffered in fixed-size chunks; the size of the buffer is chosen using a heuristic trying to determine the underlying device’s “block size” and falling back on io.DEFAULT_BUFFER_SIZE. On many systems, the buffer will typically be 4096 or 8192 bytes long.

  • “Interactive” text files (files for which isatty() returns True) use line buffering. Other text files use the policy described above for binary files.

encoding is the name of the encoding used to decode or encode the file. This should only be used in text mode. The default encoding is platform dependent (whatever locale.getpreferredencoding() returns), but any text encoding supported by Python can be used. See the codecs module for the list of supported encodings.
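
A small sketch of the buffering and encoding parameters (the filenames are placeholders):

# Line buffering (text mode only): the buffer is flushed at every newline.
with open("log.txt", "a", buffering=1, encoding="utf-8") as log_file:
    log_file.write("one line, flushed as soon as the newline is written\n")

# Unbuffered I/O is only allowed in binary mode.
with open("raw.bin", "wb", buffering=0) as raw_file:
    raw_file.write(b"sent straight to the operating system\n")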

errors is an optional string that specifies how encoding and decoding errors are to be handled; this cannot be used in binary mode. A variety of standard error handlers are available (listed under Error Handlers), though any error handling name that has been registered with codecs.register_error() is also valid. The standard names include:

  • 'strict' to raise a ValueError exception if there is an encoding error. The default value of None has the same effect.

  • 'ignore' ignores errors. Note that ignoring encoding errors can lead to data loss.

  • 'replace' causes a replacement marker (such as '?') to be inserted where there is malformed data.

  • 'surrogateescape' will represent any incorrect bytes as code points in the Unicode Private Use Area ranging from U+DC80 to U+DCFF. These private code points will then be turned back into the same bytes when the surrogateescape error handler is used when writing data. This is useful for processing files in an unknown encoding.

  • 'xmlcharrefreplace' is only supported when writing to a file. Characters not supported by the encoding are replaced with the appropriate XML character reference &#nnn;.

  • 'backslashreplace' replaces malformed data by Python’s backslashed escape sequences.

  • 'namereplace' (also only supported when writing) replaces unsupported characters with \N{...} escape sequences.
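
To see a couple of these handlers in action, here is a sketch that deliberately reads UTF-8 text back with the wrong encoding:

# Write some text that plain ASCII cannot represent.
with open("unicode.txt", "w", encoding="utf-8") as f:
    f.write("caf\u00e9\n")

# With the default 'strict' handler this read would raise UnicodeDecodeError.
with open("unicode.txt", "r", encoding="ascii", errors="replace") as f:
    print(f.read())   # undecodable bytes show up as replacement characters

with open("unicode.txt", "r", encoding="ascii", errors="ignore") as f:
    print(f.read())   # undecodable bytes are silently dropped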

newline controls how universal newlines mode works (it only applies to text mode). It can be None, '', '\n', '\r', and '\r\n'. It works as follows:

  • When reading input from the stream, if newline is None, universal newlines mode is enabled. Lines in the input can end in '\n', '\r', or '\r\n', and these are translated into '\n' before being returned to the caller. If it is '', universal newlines mode is enabled, but line endings are returned to the caller untranslated. If it has any of the other legal values, input lines are only terminated by the given string, and the line ending is returned to the caller untranslated.

  • When writing output to the stream, if newline is None, any '\n' characters written are translated to the system default line separator, os.linesep. If newline is '' or '\n', no translation takes place. If newline is any of the other legal values, any '\n' characters written are translated to the given string.
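
A small sketch of the newline parameter (crlf.txt is a placeholder):

# Write Windows-style line endings: every '\n' becomes '\r\n' on the way out.
with open("crlf.txt", "w", newline="\r\n") as f:
    f.write("first\nsecond\n")

# Read with universal newlines (the default, newline=None): endings become '\n'.
with open("crlf.txt", "r") as f:
    print(f.readlines())    # ['first\n', 'second\n']

# Read untranslated: the original '\r\n' endings are preserved.
with open("crlf.txt", "r", newline="") as f:
    print(f.readlines())    # ['first\r\n', 'second\r\n']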

If closefd is False and a file descriptor rather than a filename was given, the underlying file descriptor will be kept open when the file is closed. If a filename is given closefd must be True (the default) otherwise an error will be raised.
A custom opener can be used by passing a callable as opener. The underlying file descriptor for the file object is then obtained by calling opener with (file, flags). opener must return an open file descriptor (passing os.open as opener results in functionality similar to passing None).
The newly created file is non-inheritable.
The following example uses the dir_fd parameter of the os.open() function to open a file relative to a given directory:

>>> import os
>>> dir_fd = os.open('somedir', os.O_RDONLY)
>>> def opener(path, flags):
...     return os.open(path, flags, dir_fd=dir_fd)
...
>>> with open('spamspam.txt', 'w', opener=opener) as f:
...     print('This will be written to somedir/spamspam.txt', file=f)
...
>>> os.close(dir_fd)  # don't leak a file descriptor

The type of file object returned by the open() function depends on the mode. When open() is used to open a file in a text mode ('w', 'r', 'wt', 'rt', etc.), it returns a subclass of io.TextIOBase (specifically io.TextIOWrapper). When used to open a file in a binary mode with buffering, the returned class is a subclass of io.BufferedIOBase. The exact class varies: in read binary mode, it returns an io.BufferedReader; in write binary and append binary modes, it returns an io.BufferedWriter, and in read/write mode, it returns an io.BufferedRandom. When buffering is disabled, the raw stream, a subclass of io.RawIOBase, io.FileIO, is returned.
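
You can check this yourself with a quick sketch (example.txt is the placeholder file used earlier, or any file you have handy):

import io

with open("example.txt", "w", encoding="utf-8") as f:
    print(type(f))                    # <class '_io.TextIOWrapper'>

with open("example.txt", "rb") as f:
    print(type(f))                    # <class '_io.BufferedReader'>

with open("example.txt", "rb", buffering=0) as f:
    print(type(f))                    # <class '_io.FileIO'>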

Using asynchronous functions, loops, and context managers

Python 3.5 adds the 'async def', 'async with', and 'async for' syntax, allowing the creation of asynchronous functions, context managers, and for loops. It is unknown if Python will add an 'async while' syntax.
The asynchronous function definition is as follows:

async def <function name>():
    # code

or, with parameters:

async def <function name>(<parameters>):
    # code
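
Note that calling an async def function does not run its body; it returns a coroutine object that has to be driven by an event loop. A minimal runnable sketch, using the Python 3.5-era asyncio API that the examples below also use:

import asyncio

async def greet(name):
    await asyncio.sleep(1)            # suspend without blocking the event loop
    return "Hello, {}!".format(name)

loop = asyncio.get_event_loop()
try:
    print(loop.run_until_complete(greet("world")))
finally:
    loop.close()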

Below is the full text of the Python changelog entry for this addition:

PEP 492 greatly improves support for asynchronous programming in Python by adding awaitable objects, coroutine functions, asynchronous iteration, and asynchronous context managers.
Coroutine functions are declared using the new async def syntax:
>>> async def coro():
...     return 'spam'
Inside a coroutine function, the new await expression can be used to suspend coroutine execution until the result is available. Any object can be awaited, as long as it implements the awaitable protocol by defining the __await__() method.
PEP 492 also adds the async for statement for convenient iteration over asynchronous iterables.
An example of a simple HTTP client written using the new syntax:

import asyncio

async def http_get(domain):
    reader, writer = await asyncio.open_connection(domain, 80)

    writer.write(b'\r\n'.join([
        b'GET / HTTP/1.1',
        b'Host: %b' % domain.encode('latin-1'),
        b'Connection: close',
        b'', b''
    ]))

    async for line in reader:
        print('>>>', line)

    writer.close()

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(http_get('example.com'))
finally:
    loop.close()

Similarly to asynchronous iteration, there is a new syntax for asynchronous context managers. The following script:

import asyncio

async def coro(name, lock):
    print('coro {}: waiting for lock'.format(name))
    async with lock:
        print('coro {}: holding the lock'.format(name))
        await asyncio.sleep(1)
        print('coro {}: releasing the lock'.format(name))

loop = asyncio.get_event_loop()
lock = asyncio.Lock()
coros = asyncio.gather(coro(1, lock), coro(2, lock))
try:
    loop.run_until_complete(coros)
finally:
    loop.close()

will print:
coro 2: waiting for lock
coro 2: holding the lock
coro 1: waiting for lock
coro 2: releasing the lock
coro 1: holding the lock
coro 1: releasing the lock
Note that both async for and async with can only be used inside a coroutine function declared with async def.
Coroutine functions are intended to be run inside a compatible event loop, such as the asyncio event loop.
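
The reader in the HTTP example above is an asyncio object that already supports async for. To show what is behind that protocol, here is a sketch of a hand-written asynchronous iterable (assuming Python 3.5.2 or later; Countdown is just a made-up name for the example):

import asyncio

class Countdown:
    # A tiny asynchronous iterable: yields n, n-1, ..., 1 with a pause between values.
    def __init__(self, n):
        self.n = n

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.n <= 0:
            raise StopAsyncIteration
        await asyncio.sleep(0.1)
        self.n -= 1
        return self.n + 1

async def main():
    async for value in Countdown(3):
        print(value)

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
finally:
    loop.close()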
