Feeds
Planet Python

Doug Hellmann: entry-point-inspector 0.2.1
Monday, 12 April 2021
What's new in 0.2.1? Update Python support versions to drop Python 2 and include 3.7, 3.8, and 3.9; fix a packaging issue by requiring setuptools for installation (contributions by Miro Hrončok); fix a linter issue in setup.py; add GitHub Actions to run CI jobs.

John Cook: Spaceship operator in Python
Monday, 12 April 2021
Some programming languages, such as Perl, have an infix operator <=> that returns a three-state comparison. The expression a <=> b evaluates to -1, 0, or 1 depending on whether a < b, a = b, or a > b. You could think of <=> as a concatenation of <, =, and >. The <=> operator is often called the "spaceship operator" because it looks like Darth Vader's ship in Star Wars. Python doesn't have a spaceship operator, but you can get the same effect with numpy.sign(a-b). For example, suppose you wanted to write a program to compare two integers. You could write

    from numpy import sign

    def compare(x, y):
        cmp = ["equal to", "greater than", "less than"][sign(x-y)]
        print(f"{x} is {cmp} {y}.")

Here we take advantage of the fact that an index of -1 points to the last element of a list. The sign function will return an integer if its argument is an integer and a float if its argument is a float. The code above will break if you pass in floating point numbers, because sign will return -1.0, 0.0, or 1.0, and floats can't be used as list indices. But if you replace sign(x-y) with int(sign(x-y)) it will work for floating point arguments. Related post: Symbol pronunciation The post Spaceship operator in Python first appeared on John D. Cook.
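If pulling in NumPy just for this feels heavy, the same three-state comparison can be written in pure Python with a well-known idiom (a quick sketch; the cmp3 name is made up here):

```python
def cmp3(a, b):
    """Spaceship-style comparison returning -1, 0, or 1.
    (a > b) and (a < b) are booleans, and int(True) == 1,
    so their difference is exactly the sign of the comparison."""
    return (a > b) - (a < b)

print(cmp3(1, 2))    # -1
print(cmp3(3.0, 3))  # 0
print(cmp3(5, 4))    # 1
```

Unlike numpy.sign, this always returns an int, so it is safe to use as a list index even with float arguments.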

death and gravity: Looking to improve by reading code? Some great examples from the Python standard library
Monday, 12 April 2021
So, you're an advanced beginner – you've learned your way past Python basics and can solve real problems. You've now moved past tutorials and blog posts; maybe you feel they offer one-dimensional solutions to simple, made-up problems; maybe instead of solving this specific problem, you want to get better at solving problems in general. Maybe you heard you should develop an eye by reading and writing a lot of code. It's true. So, what code should you read? "Just read what you like." What if you don't know what you like? What if you don't like the right thing? Or worse, what if you like the wrong thing, and get stuck with bad habits because of it? After all, you have to have an eye for that... ...but that's what you're trying to develop in the first place. "There are so many projects on GitHub – pick one you like and see how they did it." But most successful projects are quite large; where do you start from? And even if you knew where to start, how they did it isn't always obvious. Yes, the code is right there, but it doesn't really tell you why they did it, what they didn't do, nor how they thought about the whole thing. In other words, it is not obvious from the code itself what the design philosophy was and what choices were considered before settling on an implementation. In this article, we'll look at some standard library modules where it is.

A note about the standard library #

As a whole, the Python standard library isn't great for learning "good" style. While all the modules are useful, they're not very uniform: they have different authors; some of them are old (pythonic was different 10-20 years ago); and they have to preserve backwards compatibility (refactoring risks introducing bugs, and major API changes are out of the question).
On the other hand, the newer modules are more consistent, have detailed PEPs explaining the design tradeoffs, and some took inspiration from already mature third-party libraries. It's a few of the latter ones we'll look at. Style aside, there's a lot to learn from the standard library, since it solves real problems for a diverse population of developers. It's interesting and educational to look at the differences between stdlib modules and newer external alternatives – the differences show a perceived deficiency in the standard library (otherwise they wouldn't have bothered with the new thing); an example of this is urllib vs. requests.

How to read these #

Roughly in this order: Get familiar with them as a user: read the documentation, maybe play with the examples a bit. Read the corresponding Python Enhancement Proposal (PEP); the interesting sections usually are the Abstract, Rationale, Design Decisions, Discussion, and Rejected Ideas. Read the code; it's linked at the top of each documentation page.

dataclasses #

The dataclasses module reduces the boilerplate of writing classes by generating special methods like __init__ and __repr__. (See this tutorial for an introduction that has more concrete examples than the official documentation.) It was introduced in PEP 557, as a simpler version of attrs. The Specification section is similar to the documentation; the good stuff is in Rationale, Discussion, and Rejected Ideas. The code is extremely well commented; particularly interesting is this use of decision tables (ASCII version, nested if version). It is also a good example of metaprogramming. Raymond Hettinger's Dataclasses: The code generator to end all code generators talk looks at dataclasses with a focus on the code generation aspects (HTML slides, PDF slides).

pathlib #

The pathlib module provides a simple hierarchy of classes to handle filesystem paths; it is a higher-level alternative to os.path. It was introduced in PEP 428.
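Before digging into the PEP, a quick taste of the API may help (a minimal sketch; the path components are made up, and PurePosixPath is used so the output is the same on any OS):

```python
from pathlib import PurePosixPath

# Paths are objects; joining uses the / operator instead of os.path.join()
p = PurePosixPath("/tmp") / "project" / "notes.txt"

print(p.name)    # the final component
print(p.suffix)  # the file extension
print(p.parent)  # the containing directory
```

The / operator and the named accessors are a good example of the object-oriented style the article contrasts with os.path's function-based style.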
Most of the examples serve to illustrate the underlying philosophy, with the code left as specification. The code is a good read for a few reasons: You're likely already familiar with the subject matter; even if you didn't use it before, you may have used os.path, or a similar library in some other language. It is a good object-oriented solution. It uses object-oriented programming with abstract (read: invented) concepts to achieve better code structure and reuse. It's probably a much better example than the traditional Animal–Dog–Cat–Duck. It is a good comparative study subject: both pathlib and os.path offer the same functionality with vastly different programming styles. Also, there was another proposal all the way back in 2006 that was rejected, and there are at least five other object-oriented filesystem path libraries out there. pathlib learns from all of them.

statistics #

The statistics module adds statistical functions to the standard library; it's not intended to be a competitor to libraries like NumPy, but is rather "aimed at the level of graphing and scientific calculators". It was introduced in PEP 450. Even if you are not familiar with the subject matter, it is a very interesting read: The Rationale section compares the proposal with NumPy and do-it-yourself solutions; it's particularly good at showing what and why something is added to the standard library. There's also a Design Decisions section, which makes explicit what the general design philosophy was. The Discussion and FAQ sections also have some interesting details. The documentation is also very nice. This is by design; as the proposal says: "Plenty of documentation, aimed at readers who understand the basic concepts but may not know (for example) which variance they should use [...] But avoid going into tedious mathematical detail." The code is relatively simple, and when it's not, there are comments and links to detailed explanations or papers.
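To get a feel for the calculator-level scope described above (a minimal sketch; the data values are made up):

```python
import statistics

data = [2.5, 3.25, 5.75, 1.75, 2.5]

print(statistics.mean(data))      # arithmetic mean
print(statistics.median(data))    # middle value of the sorted data
print(statistics.variance(data))  # sample variance (n - 1 denominator)
```

Note that the module provides both variance (sample) and pvariance (population), which is exactly the "which variance should I use" distinction the documentation is written to explain.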
You may find this useful if you're just learning about this stuff and find it easier to read code than maths notation.

Bonus: graphlib #

graphlib was added in Python 3.9, and at the moment contains just one thing: an implementation of a topological sort algorithm (here's a refresher on what it is and how it's useful). This doesn't come with a PEP; it does however have an issue with lots of comments from various core developers, including Raymond Hettinger and Tim Peters (of Zen of Python fame). Since this is essentially a solved problem, most of the discussion focuses on the API instead: where to put it, what to call it, how to represent the input and the output, how to make it easy to use and flexible at the same time. One thing they're trying to do is reconcile two different use cases: Here's a graph, give me all the nodes in topological order. Here's a graph, give me the nodes that can be processed right now (either because they don't have dependencies, or because their dependencies have already been processed). This is useful for parallelizing work, for example downloading and installing packages that depend on other packages. Unlike with PEPs, you can see the solution evolving as you read. Most enhancement proposals summarize the main other choices as well, but if you don't follow the mailing list links it's easy to get the impression they just appear, fully formed. Compared to the discussion, the code itself is tiny – just under 250 lines, mostly comments and documentation. That's it for now. If you found this useful, please consider sharing it on Reddit or anywhere else :)

Real Python: Start Contributing to Python: Your First Steps
Monday, 12 April 2021
If you want to start contributing to open source, then Python is a great project to start with. You'll not only be making your mark on one of the biggest projects out there, but you'll also be doing it as part of a vibrant and welcoming community. Open source projects rely on contributions from volunteers like you to grow and evolve, so you'll be making a real difference to the future of open source software. On top of that, contributing to open source is a great way to learn and build your skills, so don't worry if you don't feel like an expert. There may be a way to contribute that's perfect for you, even if you don't know about it yet. It all starts with your first contribution! By the end of this tutorial, you'll know:

How you can contribute in a way that matches your skills and interests
What resources and tools you can use to help you contribute confidently
Where you can find ideas for fixes to propose in your first contribution

Free Download: Get a sample chapter from CPython Internals: Your Guide to the Python 3 Interpreter showing you how to unlock the inner workings of the Python language, compile the Python interpreter from source code, and participate in the development of CPython.

How You Can Contribute

Depending on your interests and skills, you can contribute in a number of different ways. For example, if you want to contribute to CPython, you can:

Fix code bugs
Write unit tests for functions in the standard library
Write documentation for functions in the standard library

But if you want to contribute in other areas, you can:

Write documentation for the Python Developer's Guide
Translate documentation
Use your front-end skills to improve Python's official site

You can also help review pull requests from other contributors. The core developers have a lot of work on their hands, so if you can help move some issues forward, then you'll be helping Python get better faster.
How to Get the Resources You'll Need

When you start contributing to an open source project, there can be a lot of information to take in all at once. Read the full article at https://realpython.com/startcontributingpython/ »

[ Improve Your Python With Python Tricks – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Cusy: Python Pattern Matching in Linux Magazine 05/2021
Monday, 12 April 2021
Linux Magazin 05/2021. The originally object-oriented programming language Python is to receive a new feature in version 3.10 that is mainly known from functional languages: pattern matching. The change is controversial in the Python community and has triggered a heated debate. Pattern matching is a symbol-processing method that uses a pattern to identify discrete structures or subsets, e.g. strings, trees or graphs. This procedure is found in functional or logical programming languages, where a match expression is used to process data based on its structure, e.g. in Scala, Rust and F#. A match statement takes an expression and compares it to successive patterns specified as one or more cases. This is superficially similar to a switch statement in C, Java or JavaScript, but much more powerful. Python 3.10 is now also to receive such a match expression. The implementation is described in PEP (Python Enhancement Proposal) 634. [1] Further information on the plans can be found in PEP 635 [2] and PEP 636 [3]. How pattern matching is supposed to work in Python 3.10 is shown by this very simple example, where a value is compared with several literals:

    def http_error(status):
        match status:
            case 400:
                return "Bad request"
            case 401:
                return "Unauthorized"
            case 403:
                return "Forbidden"
            case 404:
                return "Not found"
            case 418:
                return "I'm a teapot"
            case _:
                return "Something else"

In the last case of the match statement, an underscore _ acts as a placeholder that intercepts everything. This has caused irritation among developers, because in Python an underscore is usually used before variable names to declare them for internal use. While Python does not distinguish between private and public variables as strictly as Java does, it is still a very widely used convention that is also specified in the Style Guide for Python Code [4]. However, the proposed match statement can not only check patterns, i.e.
detect a match between the value of a variable and a given pattern, it also rebinds the variables that match the given pattern. This leads to the fact that in Python we suddenly have to deal with Schrödinger constants, which only remain constant until we take a closer look at them in a match statement. The following example is intended to explain this:

    NOT_FOUND = 404
    retcode = 200

    match retcode:
        case NOT_FOUND:
            print('not found')

    print(f"Current value of {NOT_FOUND=}")

This results in the following output:

    not found
    Current value of NOT_FOUND=200

This behaviour leads to harsh criticism of the proposal from experienced Python developers such as Brandon Rhodes, author of «Foundations of Python Network Programming»: If this poorly-designed feature is really added to Python, we lose a principle I've always taught students: "if you see an undocumented constant, you can always name it without changing the code's meaning." The Substitution Principle, learned in algebra? It'll no longer apply. — Brandon Rhodes on 12 February 2021, 2:55 pm on Twitter [5] Many long-time Python developers, however, are not only grumbling about the structural pattern matching that is to come in Python 3.10. They generally regret developments in recent years in which more and more syntactic sugar has been sprinkled over the language. Original principles, as laid down in the Zen of Python [6], would be forgotten and functional stability would be lost. Although Python has defined a sophisticated process with the Python Enhancement Proposals (PEPs) [7] that can be used to collaboratively steer the further development of Python, there is always criticism on Twitter and other social media, as is the case now with structural pattern matching. In fact, the topic has already been discussed intensively in the Python community. The Python Steering Council [8] recommended adoption of the proposals as early as December 2020. Nevertheless, the topic only really boiled up with the adoption of the proposals.
The reason for this is surely the size and diversity of the Python community. Most programmers are probably only interested in discussions about extensions that solve their own problems. The other developments are overlooked until the PEPs are accepted. This is probably the case with structural pattern matching. It opens up solutions to problems that were hardly possible in Python before. For example, it allows data scientists to write matching parsers and compilers for which they previously had to resort to functional or logical programming languages. With the adoption of the PEP, the discussion has now been taken into the wider Python community. Incidentally, Brett Cannon, a member of the Python Steering Council, pointed out in an interview [9] that the last word has not yet been spoken: until the first beta version, there is still time for changes if problems arise in practically used code. He also held out the possibility of changing the meaning of _ once again. So maybe we will be spared Schrödinger's constants.

[1] PEP 634: Specification
[2] PEP 635: Motivation and Rationale
[3] PEP 636: Tutorial
[4] https://pep8.org/#descriptivenamingstyles
[5] @brandon_rhodes
[6] PEP 20 – The Zen of Python
[7] Index of Python Enhancement Proposals (PEPs)
[8] Python Steering Council
[9] Python Bytes Episode #221

Stack Abuse: Borůvka's Algorithm in Python - Theory and Implementation
Monday, 12 April 2021
Introduction

Borůvka's Algorithm is a greedy algorithm published by Otakar Borůvka, a Czech mathematician best known for his work in graph theory. Its most famous application helps us find the minimum spanning tree in a graph. A thing worth noting about this algorithm is that it's the oldest minimum spanning tree algorithm on record. Borůvka came up with it in 1926, before computers as we know them today even existed. It was published as a method of constructing an efficient electricity network. In this guide, we'll take a refresher on graphs and minimum spanning trees, and then jump into Borůvka's algorithm and implement it in Python: Graphs and Minimum Spanning Trees; Borůvka's Algorithm; Implementation.

Graphs and Minimum Spanning Trees

A graph is an abstract structure that represents a group of certain objects called nodes (also known as vertices), in which certain pairs of those nodes are connected or related. Each one of these connections is called an edge. A tree is an example of a graph: In the image above, the first graph has 4 nodes and 4 edges, while the second graph (a binary tree) has 7 nodes and 6 edges. Graphs can be applied to many problems, from geospatial locations to social network graphs and neural networks. Conceptually, graphs like these are all around us. For example, say we'd like to plot a family tree, or explain to someone how we met our significant other. We might introduce a large number of people and their relationships to make the story as interesting to the listener as it was to us. Since this is really just a graph of people (nodes) and their relationships (edges) - graphs are a great way to visualize this:

Types of Graphs

Depending on the types of edges a graph has, we have two distinct categories of graphs: undirected graphs and directed graphs. An undirected graph is a graph in which the edges do not have orientations.
All edges in an undirected graph are, therefore, considered bidirectional. Formally, we can define an undirected graph as G = (V, E), where V is the set of all the graph's nodes, and E is a set that contains unordered pairs of elements from V, which represent edges. Unordered pairs here means that the relationship between two nodes is always two-sided, so if we know there's an edge that goes from A to B, we know for sure that there's an edge that goes from B to A. A directed graph is a graph in which the edges have orientations. Formally, we can define a directed graph as G = (V, E), where V is the set of all the graph's nodes, and E is a set that contains ordered pairs of elements from V. Ordered pairs imply that the relationship between two nodes can be either one- or two-sided, meaning that if there's an edge that goes from A to B, we can't know if there's an edge that goes from B to A. The direction of an edge is denoted with an arrow. Keep in mind that two-sided relationships can be shown either by drawing two distinct arrows or just drawing two arrow points on either side of the same edge: Another way to differentiate graphs based on their edges is regarding the weight of those edges. Based on that, a graph can be weighted or unweighted. A weighted graph is a graph in which every edge is assigned a number - its weight. These weights can represent the distance between nodes, capacity, price, et cetera, depending on the problem we're solving. Weighted graphs are used pretty often, for example in problems where we need to find the shortest path or, as we will soon see, in problems in which we have to find a minimum spanning tree. An unweighted graph does not have weights on its edges. Note: In this article, we will focus on undirected, weighted graphs. A graph can also be connected or disconnected. A graph is connected if there is a path (which consists of one or more edges) between each pair of nodes.
On the other hand, a graph is disconnected if there is a pair of nodes which aren't connected by a path of edges.

Trees and Minimum Spanning Trees

There's a fair bit to be said about trees, subgraphs and spanning trees, though here's a really quick and concise breakdown: A tree is an undirected graph in which every two nodes have exactly one path connecting them, no more, no less. A subgraph of a graph A is a graph that is composed of a subset of graph A's nodes and edges. A spanning tree of graph A is a subgraph of graph A that is a tree whose set of nodes is the same as graph A's. A minimum spanning tree is a spanning tree such that the sum of all the edge weights is the smallest possible. Since it's a tree (and the edge weight sum should be minimal), there shouldn't be any cycles. Note: In case all edge weights in a graph are distinct, the minimum spanning tree of that graph is going to be unique. However, if the edge weights are not distinct, there can be multiple minimum spanning trees for a single graph. Now that we've covered the graph theory basics, we can tackle the algorithm itself.

Borůvka's Algorithm

The idea behind this algorithm is pretty simple and intuitive. We mentioned before that this is a greedy algorithm. When an algorithm is greedy, it constructs a globally "optimal" solution using smaller, locally optimal solutions for smaller subproblems. Usually, it converges on a good-enough solution, since following local optimums doesn't guarantee a globally optimal solution. Simply put, greedy algorithms make the optimal choice (out of the currently known choices) at each step of the problem, aiming to get to the overall most optimal solution when all of the smaller steps add up. You could think of greedy algorithms as a musician who's improvising at a concert and will in every moment play what sounds the best.
On the other hand, non-greedy algorithms are more like a composer, who'll think about the piece they're about to perform, and take their time to write it out as sheet music. Now, we will break down the algorithm into a couple of steps:

We initialize all nodes as individual components.
We initialize the minimum spanning tree S as an empty set that'll contain the solution.
While there is more than one component:
    Find the minimum-weight edge that connects each component to any other component.
    If this edge isn't in the minimum spanning tree S, we add it.
If there is only one component left, we have reached the end of the tree.

This algorithm takes a connected, weighted and undirected graph as input, and its output is the graph's corresponding minimum spanning tree. Let's take a look at the following graph and find its minimum spanning tree using Borůvka's algorithm: At the start, every node represents an individual component. That means that we will have 9 components. Let's see what the smallest-weight edges that connect these components to any other component are:

    Component          Smallest weight edge    Weight of the edge
    {0}                0 - 1                   4
    {1}                0 - 1                   4
    {2}                2 - 4                   2
    {3}                3 - 5                   5
    {4}                4 - 7                   1
    {5}                3 - 5                   10
    {6}                6 - 7                   1
    {7}                4 - 7                   1
    {8}                7 - 8                   3

Now, our graph is going to be in this state: The green edges in this graph represent the edges that bind together its closest components. As we can see, we now have three components: {0, 1}, {2, 4, 6, 7, 8} and {3, 5}. We repeat the algorithm and try to find the minimum-weight edges that can bind together these components:

    Component          Smallest weight edge    Weight of the edge
    {0, 1}             0 - 6                   7
    {2, 4, 6, 7, 8}    2 - 3                   6
    {3, 5}             2 - 3                   6

Now, our graph is going to be in this state: As we can see, we are left with only one component in this graph, which represents our minimum spanning tree!
The weight of this tree is 29, which we got after summing all of the edges. Now, the only thing left to do is implement this algorithm in Python.

Implementation

We are going to implement a Graph class, which will be the main data structure we'll be working with. Let's start off with the constructor:

    class Graph:
        def __init__(self, num_of_nodes):
            self.m_v = num_of_nodes
            self.m_edges = []
            self.m_component = {}

In this constructor, we provided the number of nodes in the graph as an argument, and we initialized three fields: m_v - the number of nodes in the graph; m_edges - the list of edges; m_component - the dictionary which stores the index of the component a node belongs to. Now, let's make a helper function that we can use to add an edge to the graph:

    def add_edge(self, u, v, weight):
        self.m_edges.append([u, v, weight])

This function adds an edge in the format [first, second, edge weight] to our graph. Because we ultimately want to make a method that unifies two components, we'll first need a method that finds the component index of a given node, and a method that propagates a new component index throughout a given component:

    def find_component(self, u):
        if self.m_component[u] == u:
            return u
        return self.find_component(self.m_component[u])

    def set_component(self, u):
        if self.m_component[u] == u:
            return
        else:
            for k in self.m_component.keys():
                self.m_component[k] = self.find_component(k)

In find_component, we artificially treat the dictionary as a tree. We check whether we've found the root of our component (only roots point to themselves in the m_component dictionary). If we haven't found the root node, we recursively search the current node's parent. Note: The reason we don't assume that m_component points to the correct component is that once we start unifying components, the only entries we know for sure won't change their component index are the roots.
For example, in our graph in the example above, in the first iteration the dictionary is going to look like this:

    index:  0  1  2  3  4  5  6  7  8
    value:  0  1  2  3  4  5  6  7  8

We've got 9 components, and each node is a component by itself. In the second iteration, it's going to look like this:

    index:  0  1  2  3  4  5  6  7  8
    value:  0  0  2  3  2  3  7  4  7

Now, tracing back to the roots, we'll see that our new components are: {0, 1}, {2, 4, 7, 6, 8} and {3, 5}. The last method we're going to need before implementing the algorithm itself is the method that unifies two components into one, given two nodes which belong to their respective components:

    def union(self, component_size, u, v):
        if component_size[u] <= component_size[v]:
            self.m_component[u] = v
            component_size[v] += component_size[u]
            self.set_component(u)

        elif component_size[u] >= component_size[v]:
            self.m_component[v] = self.find_component(u)
            component_size[u] += component_size[v]
            self.set_component(v)

        print(self.m_component)

In this function, we find the roots of the components for the two nodes (which are their component indexes at the same time). Then, we compare the components in terms of size, and attach the smaller one to the larger one. Then, we just add the size of the smaller one to the size of the larger one, because they are now one component. Finally, if the components are of the same size, we just unite them together however we want - in this particular example we did it by adding the second one to the first one.
Now that we've implemented all the utility methods we need, we can finally dive into Borůvka's algorithm:

    def boruvka(self):
        component_size = []
        mst_weight = 0

        minimum_weight_edge = [-1] * self.m_v

        for node in range(self.m_v):
            self.m_component.update({node: node})
            component_size.append(1)

        num_of_components = self.m_v

        print("Forming MST")
        while num_of_components > 1:
            for i in range(len(self.m_edges)):
                u = self.m_edges[i][0]
                v = self.m_edges[i][1]
                w = self.m_edges[i][2]

                u_component = self.m_component[u]
                v_component = self.m_component[v]

                if u_component != v_component:
                    if minimum_weight_edge[u_component] == -1 or \
                            minimum_weight_edge[u_component][2] > w:
                        minimum_weight_edge[u_component] = [u, v, w]
                    if minimum_weight_edge[v_component] == -1 or \
                            minimum_weight_edge[v_component][2] > w:
                        minimum_weight_edge[v_component] = [u, v, w]

            for node in range(self.m_v):
                if minimum_weight_edge[node] != -1:
                    u = minimum_weight_edge[node][0]
                    v = minimum_weight_edge[node][1]
                    w = minimum_weight_edge[node][2]

                    u_component = self.m_component[u]
                    v_component = self.m_component[v]

                    if u_component != v_component:
                        mst_weight += w
                        self.union(component_size, u_component, v_component)
                        print("Added edge [" + str(u) + " - " + str(v) + "]\n"
                              + "Added weight: " + str(w) + "\n")
                        num_of_components -= 1

            minimum_weight_edge = [-1] * self.m_v

        print("")
        print("The total weight of the minimal spanning tree is: " + str(mst_weight))

The first thing we did in this algorithm was initialize the additional lists we would need: a list of components (initialized to all of the nodes), a list that keeps their sizes (initialized to 1), and the list of minimum weight edges (-1 at first, since we don't know what the minimum weight edges are yet). Then, we go through all of the edges in the graph, and we find the roots of the components on both sides of those edges.
After that, we look for the minimum weight edge that connects these two components using a couple of if clauses: If the current minimum weight edge of component u doesn't exist (is -1), or if it's greater than the edge we're observing right now, we assign the value of the edge we're observing to it. If the current minimum weight edge of component v doesn't exist (is -1), or if it's greater than the edge we're observing right now, we assign the value of the edge we're observing to it. After we've found the cheapest edges for each component, we add them to the minimum spanning tree and decrease the number of components accordingly. Finally, we reset the list of minimum weight edges back to -1, so that we can do all of this again. We keep iterating as long as there is more than one component in the list of components. Let's put the graph we used in the example above as the input of our implemented algorithm:

    g = Graph(9)

    g.add_edge(0, 1, 4)
    g.add_edge(0, 6, 7)
    g.add_edge(1, 6, 11)
    g.add_edge(1, 7, 20)
    g.add_edge(1, 2, 9)
    g.add_edge(2, 3, 6)
    g.add_edge(2, 4, 2)
    g.add_edge(3, 4, 10)
    g.add_edge(3, 5, 5)
    g.add_edge(4, 5, 15)
    g.add_edge(4, 7, 1)
    g.add_edge(4, 8, 5)
    g.add_edge(5, 8, 12)
    g.add_edge(6, 7, 1)
    g.add_edge(7, 8, 3)

    g.boruvka()

Chucking it in the algorithm's implementation will result in:

    Forming MST
    {0: 1, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8}
    Added edge [0 - 1]
    Added weight: 4

    {0: 1, 1: 1, 2: 4, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8}
    Added edge [2 - 4]
    Added weight: 2

    {0: 1, 1: 1, 2: 4, 3: 5, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8}
    Added edge [3 - 5]
    Added weight: 5

    {0: 1, 1: 1, 2: 4, 3: 5, 4: 4, 5: 5, 6: 6, 7: 4, 8: 8}
    Added edge [4 - 7]
    Added weight: 1

    {0: 1, 1: 1, 2: 4, 3: 5, 4: 4, 5: 5, 6: 4, 7: 4, 8: 8}
    Added edge [6 - 7]
    Added weight: 1

    {0: 1, 1: 1, 2: 4, 3: 5, 4: 4, 5: 5, 6: 4, 7: 4, 8: 4}
    Added edge [7 - 8]
    Added weight: 3

    {0: 4, 1: 4, 2: 4, 3: 5, 4: 4, 5: 5, 6: 4, 7: 4, 8: 4}
    Added edge [0 - 6]
    Added weight: 7

    {0: 4, 1: 4, 2: 4, 3: 4, 4: 4, 5: 4, 6: 4, 7: 4, 8: 4}
    Added edge [2 - 3]
    Added weight: 6
    The total weight of the minimal spanning tree is: 29

The time complexity of this algorithm is O(E log V), where E represents the number of edges and V represents the number of nodes. The space complexity is O(V + E), since we have to keep a couple of lists whose sizes are equal to the number of nodes, as well as keep all the edges of the graph inside the data structure itself.

Conclusion

Even though Borůvka's algorithm is not as well known as some other minimum spanning tree algorithms like Prim's or Kruskal's, it gives us pretty much the same result - they all find the minimum spanning tree, and the time complexity is approximately the same. One advantage Borůvka's algorithm has over the alternatives is that it doesn't need to pre-sort the edges or maintain a priority queue in order to find the minimum spanning tree. Even though that doesn't improve its asymptotic complexity, since it still makes O(log V) passes over the edges, it is a bit simpler to code.
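As a sanity check on the result above, Kruskal's algorithm (mentioned in the conclusion) should find the same total weight on the same graph. A minimal sketch with a simple union-find; the kruskal_weight name is made up here:

```python
def kruskal_weight(num_nodes, edges):
    """Sum of MST edge weights: sort edges by weight, then greedily
    take each edge that joins two different components (union-find)."""
    parent = list(range(num_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total = 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:          # edge connects two components: keep it
            parent[ru] = rv
            total += w
    return total

edges = [(0, 1, 4), (0, 6, 7), (1, 6, 11), (1, 7, 20), (1, 2, 9),
         (2, 3, 6), (2, 4, 2), (3, 4, 10), (3, 5, 5), (4, 5, 15),
         (4, 7, 1), (4, 8, 5), (5, 8, 12), (6, 7, 1), (7, 8, 3)]

print(kruskal_weight(9, edges))  # 29
```

Kruskal's pre-sorting of the edges is exactly the step Borůvka's algorithm avoids, which illustrates the trade-off the conclusion describes.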

Zato Blog: Understanding WebSocket API timeoutsMonday, 12 April 2021

Zato WebSocket channels let you accept long-running API connections and, as such, they have a few settings to fine-tune their usage of timeouts. Let's discover what they are and how to use them.

WebSocket channels

The four timeout settings are listed below. All of the WebSocket clients using a particular channel will use the same timeout configuration; this means that a different channel is needed if particular clients require different settings.

- New token wait time
- Token TTL
- Ping interval
- Threshold

Tokens

New token wait time - when a new WebSocket connection is established to Zato, it has that many seconds to open a session and to send its credentials. If that is not done, Zato immediately closes the connection.

Token TTL - once a session is established and a session token is returned to the client, the token's time-to-live (TTL) will be that many seconds. If there is no message from the client within TTL seconds, Zato considers the token expired and it cannot be used any longer, although it is not guaranteed that the connection will be closed immediately after the token expires. In this context, a message that can extend the TTL means one of:

- A request sent by the client
- A response to a request previously sent by Zato
- A response to a ping message sent by Zato

Ping messages

Ping interval - Zato sends WebSocket ping messages once every that many seconds. Each time a response to a ping request is received from the client, the session token's TTL is extended by the same number of seconds. For instance, suppose a new session token was issued to a client at 15:00:00 with a TTL of 3600 (i.e. until 16:00:00) and the ping interval is 30 seconds. First, at 15:00:30, Zato will send a ping message. If the client responds successfully, the token's TTL will be increased by ping interval seconds more (here, 30) from the time the response arrived, e.g.
if it arrives at 15:00:30,789 (after 789 milliseconds), the token will be valid until 16:00:30,789, because this is the result of adding TTL and ping interval seconds from the time the response was received by the server.

Threshold - the number of missed ping messages after which Zato will close the connection. For instance, if the threshold is 5 and the ping interval is 10, Zato will ping the client once every 10 seconds; if 5 pings in a row go unanswered (a total of 50 seconds in this case), the connection will be closed immediately. Note that only pings missed consecutively count towards the threshold. For instance, if a client misses 2 out of 5 pings but then replies on the 3rd attempt, its counter of missed messages is reset and starts from 0 once more, as though it had never missed a single ping.

A note about firewalls

A great advantage of WebSocket connections is that they are bidirectional and make it easy to send messages to and from clients over the same long-lived TCP connection. However, particularly in relation to ping messages, it needs to be remembered that stateful firewalls in data centers may have their own requirements as to how often peers should communicate. This is especially true if the communication goes over the Internet rather than within the same data center.

On one hand, this means that the ping interval should be set to a value small enough to ensure that firewalls will not break connections in the belief that Zato has nothing more to send. Yet it should not be too small lest, with a huge number of connections, the overhead of pings becomes too burdensome. For instance, pinging each client once a second is almost certainly too much; usually 20-40 seconds is a much better choice.

On the other hand, firewalls may also require the side which initiated the TCP connection (i.e.
the WebSocket client) to periodically send some data to keep the connection active; otherwise the firewalls will drop the connection. This means that clients should also be configured to send ping messages, and how often they should do so may depend on what the applicable firewalls expect. Otherwise, with only Zato pinging the client, it may not be enough for the firewalls to recognize that the connection is still active.

Python code

Finally, it is worth keeping in mind that all the timeouts, TTLs and pings are managed by the platform automatically, and no programming is needed for them to work. For instance, the service below, once assigned to a WebSocket channel, can focus on the business functionality rather than on low-level management of timeouts; in other words, there is no additional code required.

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class MyService(Service):
    def handle(self):
        self.logger.info('My request is %s', self.request.input)

Next steps

Start the tutorial to learn more technical details about Zato, including its architecture, installation and usage. After completing it, you will have a multi-protocol service representing a sample scenario often seen in banking systems, with several applications cooperating to provide a single and consistent API to its callers.

Visit the support page if you would like to discuss anything about Zato with its creators.

To learn more about Zato and API integrations in Spanish, click here.
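The TTL-extension and threshold rules described above can be illustrated with a small, self-contained sketch. This is illustrative logic only, not Zato's actual implementation; the class and every name in it are hypothetical:

```python
# Illustrative sketch of the timeout rules described above; this is
# NOT Zato's implementation and all names here are hypothetical.

class SessionState:
    def __init__(self, ttl, threshold, issued_at):
        self.ttl = ttl              # token time-to-live, in seconds
        self.threshold = threshold  # max consecutive missed pings
        self.expires_at = issued_at + ttl
        self.missed_pings = 0

    def on_client_message(self, now):
        # Any request, response or pong from the client extends the
        # token's validity by TTL seconds from now and resets the
        # consecutive-miss counter
        self.expires_at = now + self.ttl
        self.missed_pings = 0

    def on_ping_timeout(self):
        # A ping went unanswered; report whether the connection
        # should now be closed (threshold reached)
        self.missed_pings += 1
        return self.missed_pings >= self.threshold
```

With a TTL of 3600 and a pong arriving 30.789 seconds after the token was issued, the token becomes valid until 30.789 + 3600 seconds after issuance, matching the arithmetic in the 15:00:30,789 example above; with a threshold of 5, the fifth consecutive unanswered ping triggers a close, while any answered ping resets the counter.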