
-
E Ink debuts a new electronic drawing technology (Friday, 30 November 2018)
E Ink — a name synonymous with e-reader screens — just debuted a new writing display technology called JustWrite. The tech offers the company’s familiar monochrome aesthetic — albeit in negative this time, with white on black. The key here, as with most of E Ink’s technology, is minimal power consumption and low cost, the […]
-
Facebook adds free TV shows Buffy, Angel, Firefly to redefine Watch (Friday, 30 November 2018)
Facebook hasn’t had a hit show yet for its long-form video hub Watch, so it’s got a new plan: digging up some deceased cult favorites from television. First up, Facebook is making all episodes of Joss Whedon’s Buffy The Vampire Slayer, Angel, and Firefly free on Facebook Watch. There’ll be simultaneous viewing Watch Parties where […]
-
PayPal: Black Friday & Cyber Monday broke records with $1B+ in mobile payment volume (Friday, 30 November 2018)
Black Friday broke records in terms of sales made from mobile devices, according to reports last week from Adobe. This week, PayPal said it saw a similar trend during the Thanksgiving to Cyber Monday shopping event. PayPal saw a record-breaking $1 billion+ in mobile payment volume for the first time ever on Black Friday — a […]
-
The International Space Station’s new robot is a freaky floating space Alexa (Friday, 30 November 2018)
Meet Cimon. The 3D-printed floating robot head was developed by Airbus for the German Space Agency. He’s been a crew member of the International Space Station since June, though as Gizmodo notes, this is the first time we’re seeing him in action. Really the floating, Watson-powered robot face is like an extremely expensive Amazon Echo […]
-
Niantic confirms that Pokémon GO is getting PvP battles ‘soon’ (Friday, 30 November 2018)
Two and a half years after the launch of Pokémon GO, it’s still missing one major staple of the main series games: player versus player battling. That’s about to change. In a series of teaser tweets this morning, the company confirmed that the battle system is on the way, noting only that it’s “coming soon.” […]
-
The Arlo security camera goes 4K (Friday, 30 November 2018)
The Arlo line was something of a surprise hit for Netgear, causing the networking company to spin it off into its own business earlier this year. The Arlo ecosystem is one of the most robust in the smart security camera space, and now it’s getting something it had never had before: 4K. The new Arlo […]
-
AT&T details its streaming service plans as it weighs a sale of its Hulu stake (Friday, 30 November 2018)
AT&T may be ready to sell its stake in Hulu, the company revealed in an analyst presentation on Thursday. The company currently owns a 10 percent stake in the service by way of WarnerMedia, as a result of its Time Warner acquisition. But AT&T today is running its own streaming services, including live TV service […]

-
Stack Abuse: Handling Unix Signals in Python (Friday, 30 November 2018)

UNIX/Linux systems offer special mechanisms to communicate between individual processes. One of these mechanisms is signals, which belong to the different methods of communication between processes (Inter Process Communication, abbreviated as IPC). In short, signals are software interrupts that are sent to a program (or process) to notify it of significant events, or to request that it run a special code sequence. A program that receives a signal either stops or continues the execution of its instructions, terminates either with or without a memory dump, or simply ignores the signal. Although this is defined in the POSIX standard, the actual reaction depends on how the developer wrote the script and implemented the handling of signals. In this article we explain what signals are, show you how to send a signal to another process from the command line, and how to process a received signal. Among other modules, the program code is mainly based on the signal module. This module connects the corresponding C headers of your operating system with the Python world.

An Introduction to Signals

On UNIX-based systems, there are three categories of signals:

System signals (hardware and system errors): SIGILL, SIGTRAP, SIGBUS, SIGFPE, SIGKILL, SIGSEGV, SIGXCPU, SIGXFSZ, SIGIO
Device signals: SIGHUP, SIGINT, SIGPIPE, SIGALRM, SIGCHLD, SIGCONT, SIGSTOP, SIGTTIN, SIGTTOU, SIGURG, SIGWINCH, SIGIO
User-defined signals: SIGQUIT, SIGABRT, SIGUSR1, SIGUSR2, SIGTERM

Each signal is represented by an integer value, and the list of available signals is comparably long and not consistent between the different UNIX/Linux variants. On a Debian GNU/Linux system, the command kill -l displays the list of signals as follows:

$ kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX

The signals 1 to 15 are roughly standardized, and have the following meaning on most Linux systems:

1 (SIGHUP): terminate a connection, or reload the configuration for daemons
2 (SIGINT): interrupt the session from the dialogue station
3 (SIGQUIT): terminate the session from the dialogue station
4 (SIGILL): illegal instruction was executed
5 (SIGTRAP): do a single instruction (trap)
6 (SIGABRT): abnormal termination
7 (SIGBUS): error on the system bus
8 (SIGFPE): floating point error
9 (SIGKILL): immediately terminate the process
10 (SIGUSR1): user-defined signal
11 (SIGSEGV): segmentation fault due to illegal access of a memory segment
12 (SIGUSR2): user-defined signal
13 (SIGPIPE): writing into a pipe, and nobody is reading from it
14 (SIGALRM): the timer terminated (alarm)
15 (SIGTERM): terminate the process in a soft way

In order to send a signal to a process in a Linux terminal you invoke the kill command with both the signal number (or signal name) from the list above and the id of the process (pid). The following example command sends the signal 15 (SIGTERM) to the process that has the pid 12345:

$ kill -15 12345

An equivalent way is to use the signal name instead of its number:

$ kill -SIGTERM 12345

Which way you choose depends on what is more convenient for you; both have the same effect. As a result the process receives the signal SIGTERM and terminates.

Using the Python signal Library

Since Python 1.4, the signal library has been a regular component of every Python release. In order to use it, first import it into your Python program:

import signal

Capturing and reacting properly to a received signal is done by a callback function - a so-called signal handler. A rather simple signal handler named receiveSignal() can be written as follows:

def receiveSignal(signalNumber, frame):
    print('Received:', signalNumber)
    return

This signal handler does nothing other than report the number of the received signal. The next step is registering the signals that are caught by the signal handler. For Python programs, all signals (except 9, SIGKILL) can be caught in your script:

if __name__ == '__main__':
    # register the signals to be caught
    signal.signal(signal.SIGHUP, receiveSignal)
    signal.signal(signal.SIGINT, receiveSignal)
    signal.signal(signal.SIGQUIT, receiveSignal)
    signal.signal(signal.SIGILL, receiveSignal)
    signal.signal(signal.SIGTRAP, receiveSignal)
    signal.signal(signal.SIGABRT, receiveSignal)
    signal.signal(signal.SIGBUS, receiveSignal)
    signal.signal(signal.SIGFPE, receiveSignal)
    #signal.signal(signal.SIGKILL, receiveSignal)
    signal.signal(signal.SIGUSR1, receiveSignal)
    signal.signal(signal.SIGSEGV, receiveSignal)
    signal.signal(signal.SIGUSR2, receiveSignal)
    signal.signal(signal.SIGPIPE, receiveSignal)
    signal.signal(signal.SIGALRM, receiveSignal)
    signal.signal(signal.SIGTERM, receiveSignal)

Next, we add the process information for the current process, and detect the process id using the method getpid() from the os module. In an endless while loop we wait for incoming signals. We implement this using two more Python modules - os and time - which we import at the beginning of our Python script, too:

import os
import time

In the while loop of our main program the print statement outputs "Waiting...". The time.sleep() function call makes the program wait for three seconds.

# output current process id
print('My PID is:', os.getpid())

# wait in an endless loop for signals
while True:
    print('Waiting...')
    time.sleep(3)

Finally, we have to test our script. Having saved the script as signal-handling.py we can invoke it in a terminal as follows:

$ python3 signal-handling.py
My PID is: 5746
Waiting...
...

In a second terminal window we send a signal to the process. We identify our first process - the Python script - by the process id as printed on screen, above.

$ kill -1 5746

The signal event handler in our Python program receives the signal we have sent to the process. It reacts accordingly, and simply confirms the received signal:

...
Received: 1
...

Ignoring Signals

The signal module defines ways to ignore received signals. In order to do that, the signal has to be connected with the predefined function signal.SIG_IGN. The example below demonstrates that, and as a result the Python program cannot be interrupted by CTRL+C anymore. To stop the Python script an alternative way has been implemented in the example script - the signal SIGUSR1 terminates the Python script. Furthermore, instead of an endless loop we use the method signal.pause(). It just waits for a signal to be received.

import signal
import os
import time

def receiveSignal(signalNumber, frame):
    print('Received:', signalNumber)
    raise SystemExit('Exiting')
    return

if __name__ == '__main__':
    # register the signal to be caught
    signal.signal(signal.SIGUSR1, receiveSignal)

    # register the signal to be ignored
    signal.signal(signal.SIGINT, signal.SIG_IGN)

    # output current process id
    print('My PID is:', os.getpid())

    signal.pause()

Handling Signals Properly

The signal handler we have used up to now is rather simple, and just reports a received signal. This shows us that the interface of our Python script is working fine. Let's improve it. Catching the signal is already a good basis but requires some improvement to comply with the rules of the POSIX standard. For higher accuracy each signal needs a proper reaction (see the list above). This means that the signal handler in our Python script needs to be extended by a specific routine per signal. This works best if we understand what a signal does, and what a common reaction is. A process that receives signal 1, 2, 9 or 15 terminates. In any other case it is expected to write a core dump, too. Up to now we have implemented a single routine that covers all the signals, and handles them in the same way. The next step is to implement an individual routine per signal. The following example code demonstrates this for the signals 1 (SIGHUP) and 15 (SIGTERM).

def readConfiguration(signalNumber, frame):
    print('(SIGHUP) reading configuration')
    return

def terminateProcess(signalNumber, frame):
    print('(SIGTERM) terminating the process')
    sys.exit()

The two functions above are connected with the signals as follows:

signal.signal(signal.SIGHUP, readConfiguration)
signal.signal(signal.SIGTERM, terminateProcess)

Running the Python script, and sending the signal 1 (SIGHUP) followed by a signal 15 (SIGTERM) with the UNIX commands kill -1 16640 and kill -15 16640, results in the following output:

$ python3 daemon.py
My PID is: 16640
Waiting...
Waiting...
(SIGHUP) reading configuration
Waiting...
Waiting...
(SIGTERM) terminating the process

The script receives the signals, and handles them properly.
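If a script registers many such per-signal routines, the repeated signal.signal() calls can be collected in a dispatch table. This is a small sketch of my own (not from the original article); it reuses the readConfiguration() and terminateProcess() functions defined above:

import signal

# Hypothetical dispatch table mapping each signal to its routine.
signal_handlers = {
    signal.SIGHUP: readConfiguration,     # defined above
    signal.SIGTERM: terminateProcess,     # defined above
}

for signum, handler in signal_handlers.items():
    signal.signal(signum, handler)

The effect is identical to the two explicit signal.signal() calls shown above; it only becomes interesting once a script handles many signals differently.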
For clarity, this is the entire script:

import signal
import os
import time
import sys

def readConfiguration(signalNumber, frame):
    print('(SIGHUP) reading configuration')
    return

def terminateProcess(signalNumber, frame):
    print('(SIGTERM) terminating the process')
    sys.exit()

def receiveSignal(signalNumber, frame):
    print('Received:', signalNumber)
    return

if __name__ == '__main__':
    # register the signals to be caught
    signal.signal(signal.SIGHUP, readConfiguration)
    signal.signal(signal.SIGINT, receiveSignal)
    signal.signal(signal.SIGQUIT, receiveSignal)
    signal.signal(signal.SIGILL, receiveSignal)
    signal.signal(signal.SIGTRAP, receiveSignal)
    signal.signal(signal.SIGABRT, receiveSignal)
    signal.signal(signal.SIGBUS, receiveSignal)
    signal.signal(signal.SIGFPE, receiveSignal)
    #signal.signal(signal.SIGKILL, receiveSignal)
    signal.signal(signal.SIGUSR1, receiveSignal)
    signal.signal(signal.SIGSEGV, receiveSignal)
    signal.signal(signal.SIGUSR2, receiveSignal)
    signal.signal(signal.SIGPIPE, receiveSignal)
    signal.signal(signal.SIGALRM, receiveSignal)
    signal.signal(signal.SIGTERM, terminateProcess)

    # output current process id
    print('My PID is:', os.getpid())

    # wait in an endless loop for signals
    while True:
        print('Waiting...')
        time.sleep(3)

Further Reading

Using the signal module and a corresponding event handler it is relatively easy to catch signals. Knowing the meaning of the different signals, and reacting properly as defined in the POSIX standard, is the next step. It requires that the event handler distinguishes between the different signals, and has a separate routine for all of them.
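One refinement the article does not cover (this is my own sketch, not part of the original post): calling sys.exit() inside a handler can interrupt the main loop in the middle of a unit of work. A common alternative is to let the handler only set a flag and have the loop exit cleanly on its own:

import signal
import time

# Module-level flag; the handler only records that a shutdown was requested.
shutdown_requested = False

def requestShutdown(signalNumber, frame):
    global shutdown_requested
    print('Received:', signalNumber, '- will exit after the current iteration')
    shutdown_requested = True

if __name__ == '__main__':
    signal.signal(signal.SIGTERM, requestShutdown)
    signal.signal(signal.SIGINT, requestShutdown)

    while not shutdown_requested:
        print('Working...')
        time.sleep(3)    # stands in for one unit of real work

    print('Clean shutdown complete')

Sending SIGTERM (or pressing CTRL+C) now finishes the current iteration before the process exits, instead of terminating mid-step.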
-
Codementor: Subtleties of Python (Friday, 30 November 2018)
A good software engineer understands how crucial attention to detail is; minute details, if overlooked, can make a world of difference between a working unit and a disaster. That’s why writing...
-
Shannon -jj Behrens: PyCon Notes: PostgreSQL Proficiency for Python People (Friday, 30 November 2018)

In summary, this tutorial was fantastic! I learned more in three hours than I would have learned if I had read a whole book! Here's the video. Here are the slides. Here are my notes:

Christophe Pettus was the speaker. He's from PostgreSQL Experts. PostgreSQL is a rich environment. It's fully ACID compliant. It has the richest set of features of any modern, production RDBMS. It has even more features than Oracle. PostgreSQL focuses on quality, security, and spec compliance. It's capable of very high performance: tens of thousands of transactions per second, petabyte-sized data sets, etc.

To install it, just use your package management system (apt, yum, etc.). Those systems will usually take care of initialization. There are many options for OS X. Heroku even built a Postgres.app that runs more like a foreground app. A "cluster" is a single PostgreSQL server (which can manage multiple databases). initdb creates the basic file structure. PostgreSQL has to be up and running to run initdb.

To create a database:

sudo su - postgres
psql
create database this_new_database;

To drop a database:

drop database this_new_database;

Debian runs initdb for you. Red Hat does not. Debian has a cluster management system. Use it. See, for instance, pg_createcluster. Always create databases as UTF-8. Once you've created it, you can't change it. Don't use SQL_ASCII. It's a nightmare. Don't use "C locale".

pg_ctl is a built-in command to start and stop PostgreSQL:

cd POSTGRES_DIRECTORY
pg_ctl -D . start

Usually, pg_ctl is wrapped by something provided by your platform. On Ubuntu, start PostgreSQL via:

service postgresql start

Always use "-m fast" when stopping. Postgres puts its own data in a top-level directory. Let's call it $PGDATA. Don't monkey around with that data. pg_clog and pg_xlog are important. Don't mess with them.

On most systems, configuration lives in $PGDATA. postgresql.conf contains server configuration. pg_hba.conf contains authentication settings. postgresql.conf can feel very overwhelming. Avoid making a lot of changes to postgresql.conf. Instead, add the following to it:

include "postgresql.conf.include"

Then, mess with "postgresql.conf.include". The important parameters fall into these categories: logging, memory, checkpoints, and the planner.

Logging: Be generous with logging. It has a very low impact on the system. It's your best source of info for diagnosing problems. You can log to syslog or log CSV to files. He showed his typical logging configuration. He showed his guidelines / heuristics for all the settings, including how to fine-tune things. They're really good! See his slides. As of version 9.3, you don't need to tweak Linux kernel parameters anymore. Do not mess with fsync or synchronous_commit.

Most settings require a server reload to take effect. Some things require a server restart. Some can be set on a per-session basis. Here's how to do that. This is also an example of how to use a transaction:

begin;
set local random_page_cost = 2.5;
show random_page_cost;
abort;

pg_hba.conf contains users and roles. Roles are like groups. They form a hierarchy. A user is just a role with login privs. Don't use the "postgres" superuser for anything application-related. Sadly, you probably will have to grant schema-modification privs to your app user if you use migrations, but if you don't have to, don't. By default, DB traffic is not encrypted. Turn on SSL if you are running in a cloud provider. In pg_hba.conf, "trust" means if they can log into the server, they can access Postgres too. "peer" means they can have a Postgres user that matches their username. "md5" is an md5-hashed password. It's a good idea to restrict the IP addresses allowed to talk to the server fairly tightly.

The WAL

The Write-Ahead Log is key to many Postgres operations. It's the basis for replication, crash recovery, etc. When each transaction is committed, it is logged to the write-ahead log. Changes in the transaction are flushed to disk. If the system crashes, the WAL is "replayed" to bring the DB to a consistent state. It's a continuous record of changes since the last checkpoint. The WAL is stored in 16MB segments in the pg_xlog directory. Never delete anything from pg_xlog. archive_command is a way to move the WAL segments to someplace safe (like a different system). By default, synchronous_commit is on, which means that commits do not return until the WAL flush is done. If you turn it off, they'll return when the WAL flush is queued. You might lose transactions in the case of a crash, but there's no risk of database corruption.

Backup and Recovery

Experience has shown that 20% of the time, your EBS volumes will not reattach when you reboot in AWS. pg_dump is a built-in dump/restore tool. It takes a logical snapshot of the database. It doesn't lock the database or prevent writes to disk. pg_restore restores the database. It's not fast. It's great for simple backups but not suitable for fast recovery from major failures. pgbench is the built-in benchmarking tool.

pg_dump -Fc --verbose example > example.dump

Without the -Fc, it dumps SQL commands instead of its custom format.

pg_restore --dbname=example_restored --verbose example.dump

pg_restore takes a long time because it has to recreate indexes.

pg_dumpall --globals-only

Back up each database with pg_dump using --format=custom. To do a parallel restore, use --jobs=. If you have a large database, pg_dump may not be appropriate. A disk snapshot + every WAL segment is enough to recreate the database. To start a PITR (point in time recovery) backup: select pg_start_backup(...); copy the disk image and any WAL files that are created; select pg_stop_backup(); make sure you have all the WAL segments. The disk image + all the WAL segments are enough to create the DB. See also github.com/wal-e/wal-e. It's highly recommended. It automates backups to S3. He explained how to do a PITR. With PITR, you can roll back to a particular point in time. You don't have to replay everything. This is super handy for application failures. RDS is something that scripts all this stuff for you.

Replication

Send the WAL to another server. Keep the server up to date with the primary server. That's how PostgreSQL replication works. The old way was called "WAL Archiving". Each 16MB segment was sent to the secondary when complete. Use rsync, WAL-E, etc., not scp. The new way is Streaming Replication. The secondary gets changes as they happen. It's all set up via recovery.conf in your $PGDATA. He showed a recovery.conf for a secondary machine, and showed how to let it become the master. Always have a disaster recovery strategy. pg_basebackup is a utility for doing a snapshot of a running server. It's the easiest way to take a snapshot to start a new secondary. It's also useful for archival backups. It's not the fastest thing, but it's pretty foolproof.

Replication, the good: easy to set up; schema changes are replicated; secondaries can handle read-only queries for load balancing; it either works or it complains loudly. The bad: you get the entire DB cluster or none of it; no writes of any kind to the secondary, not even temporary tables; some things aren't replicated, like temporary tables and unlogged tables.

His advice is to start with WAL-E. The README tells you everything. It fixes a ton of problems. The biggest problem with WAL-E is that writing to S3 can be slow. Another way to do funky things is trigger-based replication. There's a bunch of third-party packages to do this. Bucardo is one that lets you do multi-master setups. However, they're fiddly and complex to set up. They can also fail quietly.

Transactions, MVCC, and Vacuum

begin;
insert ...;
insert ...;
commit;

By the way, no bank works this way ;) Everything runs inside of a transaction. If there is no explicit transaction, each statement is wrapped in one for you. Everything that modifies the database is transactional, even schema changes. \d shows you all your tables. With a transaction, you can even roll back a table drop. South (the Django migration tool) runs the whole migration in a single transaction. Many resources are held until the end of a transaction. Keep your transactions brief and to the point. Beware of "IDLE IN TRANSACTION" sessions. This is a problem for Django apps.

A tuple in Postgres is the same thing as a row. Postgres uses Multi-Version Concurrency Control. Each transaction sees its own version of the database. Writers only block writers to the same tuple. Nothing else causes blocking. Postgres will not allow two snapshots to "fork" the database. If two people try to write to the same tuple, Postgres will block one of them. There are higher isolation modes. His description of them was really interesting. He suggested that new apps use SERIALIZABLE. This will help you find the concurrency errors in your app.

Deleted tuples are not usually immediately freed. Vacuum's primary job is to scavenge tuples that are no longer visible to any transaction. autovacuum generally handles this problem for you without intervention (since version 8). Run analyze after a major database change to help the planner out. If someone tells you "vacuum's not working", they're probably wrong. The DB generally stabilizes at 20% to 50% bloat. That's acceptable. The problem might be that there are long-running transactions or idle-in-transaction sessions. They'll block vacuuming. So will manual table locking. He talked about vacuum issues for rare situations.

Schema Design

Normalization is important, but don't obsess about it. Pick "entities". Make sure that no entity-level info gets pushed into the subsidiary items. Pick a naming scheme and stick with it. Plural or singular? DB people tend to like plural. ORMs tend to like singular. You probably want lower_case to avoid quoting. Calculated denormalization can sometimes be useful; copied denormalization is almost never useful. Joins are good. PostgreSQL executes joins very efficiently. Don't be afraid of them. Don't worry about large tables joined with small tables.

Use the typing system. It has a rich set of types. Use domains to create custom types. A domain is a core type + a constraint. Don't use polymorphic fields (fields whose interpretation is dependent on another field). Don't use strings to store multiple types. Use constraints. They're cheap and fast. You can create constraints across multiple columns. Avoid Entity-Attribute-Value schemas. They cause great pain. They're very inefficient. They make reports very difficult.

Consider using UUIDs instead of serials as synthetic keys. The problem with serials for keys is that merging tables can be hard. Don't have "Thing" tables like "Object" tables. If a table has a few frequently-updated fields and a few slowly-updated fields, consider splitting the table. Split the fast-moving stuff out into a separate 1-to-1 table.

Arrays are a first-class type in PostgreSQL. It's a good substitute for using a subsidiary table. A list of tags is a good fit for arrays. He talked about hstore. It's much better than Entity-Attribute-Value. It's great for optional, variable attributes. It's like a hash. It can be indexed, searched, etc. It lets you add attributes to tables for users. Don't use it as a way to avoid all table modifications. json is now a built-in type. There's also jsonb.

Avoid indexes on big things, like 10k character strings. NULL is a total pain in the neck. Only use it to mean "missing value". Never use it to represent a meaningful value. Let's call anything 1MB or more a "very large object". Store them in files. Store the metadata in the database. The database API is just not a good fit for this. Many-to-many tables can get extremely large. Consider replacing them with array fields (either one way or both directions). You can use a trigger to maintain integrity. You don't want more than about 250k entries in an array. Use UTF-8. Period. Always use TIMESTAMPTZ (which Django uses by default). Don't use TIMESTAMP. TIMESTAMPTZ is a timestamp converted to UTC.

Index types:

B-Tree: Use a B-Tree on a column if you frequently query on that column, use one of the comparison operators, only get back 10-15% of the rows, and run that query frequently. It won't use the index if you're going to get back more than 15% of the rows, because it's faster to scan a table than scan an index. Use a partial index if you can ignore most of the rows. The entire tuple has to be copied into the index.
GiST: It's a framework to create indexes. KNN indexes are the K-nearest neighbors.
GIN: Generalized inverted index. Used for full-text search.
The others either are not good or are very specific.

Why isn't it using my index? Use explain analyze to look at the query. If it thinks it's going to require most of the rows, it'll do a table scan. If it's wrong, use analyze to update the planner stats. Sometimes, it can't use the index. Two ways to create an index: create index, create index concurrently. reindex rebuilds an index from scratch. pg_stat_user_indexes tells you about how your indexes are being used.

What do you do if a query is slow: Use explain or explain analyze. explain doesn't actually run the query. "Cost" is measured in arbitrary units. Traditionally, they have been "disk fetches". Costs are inclusive of subnodes. I think explain analyze actually runs the query. Things that are bad: joins between 2 large tables; cross joins (cartesian products), which often happen by accident; sequential scans on large tables; select count(*), which is slow because it results in a full table scan since you have to see if the tuples are alive or dead; offset / limit, which actually run the query and then throw away that many rows. Beware that GoogleBot is relentless. Use other keys.

If the database is slow: look at pg_stat_activity: select * from pg_stat_activity; tail -f the logs. Too much I/O? iostat 5. If the database isn't responding: try connecting with it using psql; pg_stat_activity; pg_locks.

Python Particulars

psycopg2 is the only real option in Python 2. The result set of a query is loaded into client memory when the query completes. If there are a ton of rows, you could run out of memory. If you want to scroll through the results, use a "named" cursor. Be sure to dispose of it properly. The Python 3 situation is not so great. There's py-postgresql. It's pure Python.

If you are using Django 1.6+, use the @atomic decorator. Cluster all your writes into small transactions. Leave read operations outside. Do all your writes at the very end of the view function. Multi-database works very nicely with hot standby. Point the writes at the primary, and the reads at the secondary. For Django 1.5, use the @xact decorator. Sloppy transaction management can cause the dreaded Django idle-in-transaction problem. Use South for database migration. South is getting merged into Django in version 1.7 of Django. You can use manual migrations for stuff the Django ORM can't specify.

Special Situations

Upgrade to 9.3.4. Upgrade minor versions promptly. Major version upgrades require more planning. pg_upgrade has to be run when the database is not running. A full pg_dump / pg_restore is always the safest, although not the most practical. Always read the release notes. All parts of a replication set must be upgraded at once (for major versions). Use copy, not insert, for bulk loading data. psycopg2 has a nice interface. Do a vacuum afterwards.

AWS

Instances can disappear and come back up without instance storage. EBS can fail to reattach after reboot. PIOPS are useful (but pricey) if you are using EBS. Script everything: instance creation, PostgreSQL, etc. Use Salt. Use a VPC. Scale up and down as required to meet load. If you're just using them to rent a server, it's really expensive. PostgreSQL RDS is a managed database instance. Big plus: automatic failover! Big minus: you can't read from the secondary. It's expensive. It's a good place to start.

Sharding

Eventually, you'll run out of write capacity on your master. postgres-xc is an open source fork of PostgreSQL. Bucardo provides multi-master write capability. He talked about custom sharding. Instagram wrote a nice article about it.

Pooling

Opening a connection is expensive. Use a pooler. pgbouncer is a pooler. pgPool II can even do query analysis. However, it has higher overhead and is more complex to configure.

Tools

Monitor everything. check_postgres.pl is a plugin to monitor PostgreSQL. pgAdmin III and Navicat are nice clients. pgbadger is for log analysis. So is pg_stat_statements.

Closing

MVCC works by each tuple having a range of transaction IDs that can see that tuple. Failover is annoying to do in the real world. People use HAProxy, some pooler, etc. with some scripting, or they have a human do the failover. HandyRep is a server-based tool designed to allow you to manage a PostgreSQL "replication cluster", defined as a master and one or more replicas on the same network.
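To make the psycopg2 advice above concrete, here is a minimal sketch (my own illustration, not code from the talk) that uses a named, server-side cursor so a large result set is streamed instead of being loaded into client memory, and that keeps the whole read inside one short transaction. The connection settings and table name are hypothetical:

import psycopg2

conn = psycopg2.connect("dbname=example user=app_user")     # hypothetical DSN

try:
    with conn:                                       # one short transaction
        with conn.cursor(name='big_scan') as cur:    # named => server-side cursor
            cur.itersize = 2000                      # rows fetched per network round trip
            cur.execute("SELECT id, payload FROM big_table")   # hypothetical table
            for row in cur:                          # streams rows, no fetchall()
                print(row)                           # stand-in for real per-row work
finally:
    conn.close()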
-
PyBites: 3 Cool Things You Can do With the dateutil Module (Friday, 30 November 2018)

In this short article I will show you how to use dateutil's parse, relativedelta and rrule to make it easier to work with datetimes in Python. First some necessary imports:

>>> from datetime import date
>>> from dateutil.parser import parse
>>> from dateutil.relativedelta import relativedelta
>>> from dateutil.rrule import rrule, WEEKLY, WE

1. Parse a datetime from a string

This is actually what made me look into dateutil to start with. Camaz shared this technique in the forum for Bite 7, Parsing dates from logs. Imagine you have this log line:

>>> log_line = 'INFO 2014-07-03T23:27:51 supybot Shutdown complete.'

Up until recently I used datetime's strptime like so:

>>> date_str = '%Y-%m-%dT%H:%M:%S'
>>> datetime.strptime(log_line.split()[1], date_str)
datetime.datetime(2014, 7, 3, 23, 27, 51)

More string manipulation and you have to know the format string syntax. dateutil's parse takes this complexity away:

>>> timestamp = parse(log_line, fuzzy=True)
>>> print(timestamp)
2014-07-03 23:27:51
>>> print(type(timestamp))
<class 'datetime.datetime'>

2. Get a timedelta in months

A limitation of datetime's timedelta is that it does not show the number of months:

>>> today = date.today()
>>> pybites_born = date(year=2016, month=12, day=19)
>>> (today-pybites_born).days
711

So far so good. However this does not work:

>>> (today-pybites_born).years
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'datetime.timedelta' object has no attribute 'years'

Nor this:

>>> (today-pybites_born).months
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'datetime.timedelta' object has no attribute 'months'

relativedelta to the rescue:

>>> diff = relativedelta(today, pybites_born)
>>> diff.years
1
>>> diff.months
11

When you need months, use relativedelta. And yes, we can almost celebrate two years of PyBites! Another use case of this we saw in my previous article, How to Test Your Django App with Selenium and pytest, where I used it to get the last 3 months for our new Platform Coding Streak feature:

>>> def _make_3char_monthname(dt):
...     return dt.strftime('%b').upper()
...
>>> this_month = _make_3char_monthname(today)
>>> last_month = _make_3char_monthname(today-relativedelta(months=+1))
>>> two_months_ago = _make_3char_monthname(today-relativedelta(months=+2))
>>> for month in (this_month, last_month, two_months_ago):
...     print(f'{month} {today.year}')
...
NOV 2018
OCT 2018
SEP 2018

Let's get next Wednesday for the next example:

>>> next_wednesday = today+relativedelta(weekday=WE(+1))
>>> next_wednesday
datetime.date(2018, 12, 5)

3. Make a range of dates

Say I want to schedule my next batch of Italian lessons, each Wednesday for the coming 10 weeks. Easy:

>>> rrule(WEEKLY, count=10, dtstart=next_wednesday)
<dateutil.rrule.rrule object at 0x1033ef898>

As this will return an iterator and it does not show up vertically, let's materialize it in a list and pass it to pprint:

>>> from pprint import pprint as pp
>>> pp(list(rrule(WEEKLY, count=10, dtstart=next_wednesday)))
[datetime.datetime(2018, 12, 5, 0, 0),
 datetime.datetime(2018, 12, 12, 0, 0),
 datetime.datetime(2018, 12, 19, 0, 0),
 datetime.datetime(2018, 12, 26, 0, 0),
 datetime.datetime(2019, 1, 2, 0, 0),
 datetime.datetime(2019, 1, 9, 0, 0),
 datetime.datetime(2019, 1, 16, 0, 0),
 datetime.datetime(2019, 1, 23, 0, 0),
 datetime.datetime(2019, 1, 30, 0, 0),
 datetime.datetime(2019, 2, 6, 0, 0)]

Double-check with Unix cal:

$ cal 12 2018
   December 2018
Su Mo Tu We Th Fr Sa
                   1
 2  3  4  5  6  7  8
 9 10 11 12 13 14 15
16 17 18 19 20 21 22
23 24 25 26 27 28 29
30 31

$ cal 1 2019
    January 2019
Su Mo Tu We Th Fr Sa
       1  2  3  4  5
 6  7  8  9 10 11 12
13 14 15 16 17 18 19
20 21 22 23 24 25 26
27 28 29 30 31

$ cal 2 2019
   February 2019
Su Mo Tu We Th Fr Sa
                1  2
 3  4  5  6  7  8  9
...

We added an exercise to our platform to create a #100DaysOfCode planning, skipping weekend days. rrule made this relatively easy. And that's it, my favorite use cases of dateutil so far. There is some timezone functionality in dateutil as well, but I have mostly used pytz for that. Learn more? Check out this nice dateutil examples page and feel free to share your favorite snippets in the comments below. Don't forget this is an external library (pip install python-dateutil); for most basic operations datetime would suffice. Another nice stdlib module worth checking out is calendar. Keep Calm and Code in Python! -- Bob
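The #100DaysOfCode planning mentioned above is a nice fit for rrule as well. A possible sketch (mine, not from the article): take daily occurrences but restrict them to weekdays with byweekday, so weekend days are skipped automatically. The start date is made up:

>>> from datetime import date
>>> from dateutil.rrule import rrule, DAILY, MO, TU, WE, TH, FR
>>> start = date(2019, 1, 1)    # hypothetical start date
>>> plan = list(rrule(DAILY, count=100, byweekday=(MO, TU, WE, TH, FR), dtstart=start))
>>> plan[0], plan[-1]
(datetime.datetime(2019, 1, 1, 0, 0), datetime.datetime(2019, 5, 20, 0, 0))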
-
Reinout van Rees: Amsterdam Python meetup, November 2018 (Friday, 30 November 2018)

My summary of the 28 November Python meetup at the Byte office. I myself also gave a talk (about cookiecutter) but I obviously haven't made a summary of that. I'll try to summarize that one later :-)

Project Auger - Chris Laffra

One of Chris' pet projects is auger, automated unittest generation. He wrote it when lying in bed with a broken ankle and thought about what he hated most: writing tests. Auger? Automated Unittest GEneRator. It works by running a tracer. The project's idea is: write code as always, don't worry about tests, and run the auger tracer to record function parameter values and function results. After recording, you can generate mocks and assertions. "But this breaks test driven development"!!! Actually, not quite. It can be useful if you have to start working on an existing code base without any tests: you can generate a basic set of tests to start from. So: it records what you did once and uses that as a starting point for your tests. It makes sure that what once worked keeps on working.

It works with a "context manager". A context manager normally has __enter__() and __exit__(), but you can add more interesting things. If in the __enter__() you call sys.settrace(self.trace), you can add a def trace(self, frame, event, args), which is then fired upon everything that happens within the context manager. You can use it for coverage tracking or logging or visualization of what happens in your code. He used the last for algorithm visualizations on http://pyalgoviz.appspot.com/ So... this sys.settrace() magic is used to figure out which functions get called with which parameters. Functions and classes in the modules you want to check are tested; classes from other modules are partially mocked.

Python LED animation system BiblioPixel - Tom Ritchford

Bibliopixel (https://github.com/ManiacalLabs/BiblioPixel) is his pet project. It is a Python 3 program that runs on basically everything (Raspberry Pi, Linux, OS X, Windows). What does it do? It controls large numbers of lights in real time without programming. There are lots of output drivers, from LED strips and Philips Hue to an OpenGL in-browser renderer. There are also lots of different ways to steer it. Here is the documentation.

He actually started on a lot of programs having to do with audio and lights, beginning with a PDP-11 (which only produced beeps), then Amiga, Macintosh (something that actually worked and was used for real), Java, JavaScript, Python + C++. And now Python. The long-term goal is to programmatically control lights and other hardware in real time. And... he wants to define the project in text files. The actual light "program" should not be in code. Ideally, bits of projects ought to be reusable. And any input ought to be connectable to any output. Bibliopixel started with the AllPixel LED controller, which had a successful Kickstarter campaign (he got involved two years later).

An "animation" talks to a "layout" and the layout talks to one or more drivers (one could be a debug visualization on your laptop and the other the real physical installation). Animations can be nested. Above it all is the "Project": a YAML (or JSON) file that defines the project and configures everything. Bibliopixel is quite forgiving about inputs. It accepts all sorts of colors (red, #ff0000, etc). Capitalization, missing spaces, extraneous spaces: all fine. Likewise for "controls": a control receives a "raw" message and then tries to convert it into something it understands.

Bibliopixel is very reliable. Lots of automated tests. Hardware test boards to test the code with the eight most common types of hardware. Solid error handling and readable error messages help a lot. There are some weak points. The biggest is lack of developers. Another problem is that it only supports three colors (RGB), so you can't handle RGBW (RGB plus white) and other such newer combinations. He hopes to move the code over completely to numpy; that would help a lot. Numpy is already supported, but the existing legacy implementation also still needs to keep working. He showed some nice demos at the end.
-
PyPy Development: Funding for 64-bit Armv8-a support in PyPy (Thursday, 29 November 2018)
Hello everyone. At PyPy we are trying to support a relatively wide range of platforms. On the software side we have PyPy working on OS X, Windows and various flavors of Linux (and unofficially various flavors of BSD); on the hardware side we cover x86, x86_64, PPC, 32-bit Arm (v7) and even zarch. This is harder than for other projects, since PyPy emits assembler on the fly from the just-in-time compiler, and it requires a significant amount of work to port it to a new platform. We are pleased to announce that Arm Limited, together with Crossbar.io GmbH, are sponsoring the development of 64-bit Armv8-a architecture support through Baroque Software OU, which would allow PyPy to run on a new variety of low-power, high-density servers with that architecture. We believe this will be beneficial for the funders, for the PyPy project, and for the wider community. The work will commence soon and will be done some time early next year, with expected speedups either comparable to x86 speedups or, if our current experience with ARM holds, more significant than x86 speedups. Best, Maciej Fijalkowski and the PyPy team
-
Python Engineering at Microsoft: Python in Visual Studio Code – November 2018 Release (Thursday, 29 November 2018)

We are pleased to announce that the November 2018 release of the Python Extension for Visual Studio Code is now available. You can download the Python extension from the Marketplace, or install it directly from the extension gallery in Visual Studio Code. You can learn more about Python support in Visual Studio Code in the documentation. This was a quality-focused release: we closed a total of 28 issues, improving startup performance and fixing various bugs related to interpreter detection and Jupyter support. Keep on reading to learn more!

Improved Python Extension Load Time

We have started using webpack to bundle the TypeScript files in the extension for faster load times. This has significantly improved the extension download size, installation time and extension load time. You can see the startup time of the extension by running the Developer: Startup Performance command; the release post shows before and after times of extension loading (measured in milliseconds). One downside to this approach is that reporting and troubleshooting issues with the extension is harder, as the call stacks output by the Python extension are minified. To address this we have added the Python: Enable source map support for extension debugging command. This command will load source maps for better error log output. This slows down load time of the extension, so we provide a helpful reminder to disable it every time the extension loads with source maps enabled. These download, install, and startup performance improvements will help you get to writing your Python code faster, and we have even more improvements planned for future releases.

Other Changes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. The full list of improvements is listed in our changelog; some notable changes include:

Update Jedi to 0.13.1 and parso 0.3.1. (#2667)
Make diagnostic message actionable when opening a workspace with no currently selected Python interpreter. (#2983)
Fix problems with virtual environments not matching the loaded python when running cells. (#3294)
Make nbconvert in an installation not prevent notebooks from starting. (#3343)

Be sure to download the Python extension for Visual Studio Code now to try out the above improvements. If you run into any problems, please file an issue on the Python VS Code GitHub page.

-
Hardware-in-the-Loop Testing Meets Wireless System Challenges - ECNmag.com (Friday, 30 November 2018)
Recent years have brought many advancements for hardware-in-the-loop testing. Gone are the days when a single device could be connected at once and the amount of simulated wireless links couldn't cover the number of antennas found in a modern day ...
-
ASUS ZenBook 15 review: Deserving of a place among the elite - Windows Central (Friday, 30 November 2018)
Some of the Taiwanese manufacturer's more recent products have been superb, and the latest, the refreshed ZenBook lineup, looks to be no exception. The ZenBook 15 is the range topper, packed with the latest hardware, crammed into a svelte, elegant ...
-
Apple's Latest Reinvention: Apple as a Service - 24/7 Wall St. (Friday, 30 November 2018)
Six months ago we introduced our view of an upcoming investor paradigm shift from Apple as a hardware company to Apple as a Service. We are now in the midst of a four-step transformation process: from news to knee-jerk to indifference to enlightenment.
-
How to Update Your Drivers in Windows - PCMag.com (Friday, 30 November 2018)
Find your hardware in the list, right-click on it, and choose Update Driver. Then click Browse My Computer for Driver Software, and navigate to the file you downloaded to install it. Once the driver has been successfully installed, you should have all ...
-
AWS does hybrid cloud with on-prem hardware, VMware help - Network World (Thursday, 29 November 2018)
Outposts can be upgraded with the latest hardware and next-generation instances to run all native AWS and VMware applications, AWS stated. A second version, VMware Cloud on AWS Outposts, lets customers use the VMware control plane and APIs to run ...
Related coverage: Amazon Outpost brings cloud technology to traditional data centers (CNBC); AWS Launches On-Premises Hardware Alongside VMware (CRN); Buy Or Build An Autonomous Race Car To Take The Checkered Flag (Hackaday)
-
Intel Adds Support for Universal Windows Drivers With Latest Graphics Release - ExtremeTech (Thursday, 29 November 2018)
Because a base driver can be used across all systems that share a hardware part, Microsoft can test the base driver broadly via Windows Insider flighting, rather than limiting distribution to specific machines. The OEM validates only the optional ...
Related coverage: Intel Commits to Microsoft's Universal Windows Platform for All Future Driver Releases (NDTV); Intel launches Windows Modern Drivers for Windows 10 (WindowsLatest); Intel Introduces First Universal Windows Driver for Graphics (Tom's Hardware)

-
The best Windows tablets 2018: all of the top Windows tablets reviewed (Friday, 30 November 2018)
Brace yourself for the best Windows tablets to take on the go for 2018.
-
Best touchscreen laptops 2018: the best touchscreen laptops we've tapped this year (Friday, 30 November 2018)
Phablets might be the biggest craze, but if you want to do more with your touchscreen, then a laptop is the smart buy, and we've rounded up the best.
-
Best Mac 2018: the best Macs to buy this year (Friday, 30 November 2018)
We weigh the pros and cons of Apple's best Mac desktops and laptops, from Mac mini to the MacBook Pro.
-
The best rugged laptops of 2018: we test drop-proof laptops for working outside (Friday, 30 November 2018)
We list the best rugged laptops in 2018 that give you extra protection from environmental hazards.
-
Best laptops for kids: the top laptops for kids in elementary school and beyond (Friday, 30 November 2018)
Find the best laptops for children of all ages, from their first to their last day of school and everything in between.
-
The best portable laptop battery chargers and power banks in 2018 (Friday, 30 November 2018)
We hunt out the perfect notebook power bank so you don’t have to.
-
The best laptop for writers: the 10 best laptops for authors and journalists (Friday, 30 November 2018)
We've gathered together the best laptops money can buy for writers and journalists.