There’s a tried-and-true architecture that I’ve seen many times for supporting your web services and applications:
Redis for coordinating background job queues (and some limited atomic operations)
Redis is fantastic, but what if I told you that its most common use cases for this stack could actually be achieved using only PostgreSQL?
Perhaps the most common use of Redis I’ve seen is to coordinate dispatching of jobs from your web service to a pool of background workers. The concept is that you’d like to record the desire for some background job to be performed (perhaps with some input data) and to ensure that only one of your many background workers will pick it up. Redis helps with this because it provides a rich set of atomic operations for its data structures.
But since the introduction of version 9.5, PostgreSQL has a SKIP LOCKED option for the SELECT … FOR … statement (here's the documentation). When this option is specified, PostgreSQL will just ignore any rows that would require waiting for a lock to be released.
Consider this example from the perspective of a background worker:
WITH job AS (
  SELECT id
  FROM jobs
  WHERE status = 'pending'
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
UPDATE jobs
SET status = 'running'
FROM job
WHERE jobs.id = job.id
RETURNING jobs.*;
By specifying FOR UPDATE SKIP LOCKED, a row-level lock is implicitly acquired for any rows returned from the SELECT. Further, because you specified SKIP LOCKED, there's no chance of this statement blocking on another transaction. If there's another job ready to be processed, it will be returned. There's no concern about multiple workers running this command receiving the same row, because of the row-level lock.
The biggest caveat for this technique is that, if you have a large number of workers trying to pull off this queue and a large number of jobs feeding them, they may spend some time stepping through jobs and trying to acquire a lock. In practice, most of the apps I’ve worked on have fewer than a dozen background workers, and the cost is not likely to be significant.
Let’s imagine that you have a synchronization routine with a third-party service, and you only want one instance of it running for any given user across all server processes. This is another common application I’ve seen for Redis: distributed locking.
PostgreSQL can achieve this as well using its advisory locks. Advisory locks allow you to leverage the same locking engine PostgreSQL uses internally for your own application-defined purposes.
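As a sketch of how this might look for the synchronization example (the lock key 42 is a stand-in for a user id; real code would choose its own keying scheme), a worker can attempt the lock, skip the work if another process holds it, and release it when done:

```sql
-- Try to acquire an exclusive, session-level advisory lock keyed on the user id.
-- Returns immediately: true if we got the lock, false if another session holds it.
SELECT pg_try_advisory_lock(42) AS acquired;

-- If acquired is true: run the synchronization routine, then release the lock.
-- If acquired is false: another process is already syncing this user, so skip.
SELECT pg_advisory_unlock(42);
```

If you would rather not manage the release yourself, pg_advisory_xact_lock takes a transaction-level advisory lock that is dropped automatically when the transaction commits or rolls back.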
I saved the coolest example for last: pushing events to your active clients. For example, say you need to notify a user that they have a new message available to read. Or perhaps you’d like to stream data to the client as it becomes available. Typically, web sockets are the transport layer for these events while Redis serves as the Pub/Sub engine.
However, since version 9, PostgreSQL also provides this functionality via the LISTEN and NOTIFY statements. Any PostgreSQL client can subscribe (LISTEN) to a particular message channel, which is just an arbitrary string. When any other client sends a message (NOTIFY) on that channel, all other subscribed clients will be notified. Optionally, a small message can be attached.
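A minimal sketch (the channel name new_message and the payload format are made up for illustration):

```sql
-- In the session that should receive events (e.g. the process holding the web socket):
LISTEN new_message;

-- In any other session, when the event occurs:
NOTIFY new_message, 'user_42:message_1001';

-- Equivalent form, convenient from application code or inside triggers:
SELECT pg_notify('new_message', 'user_42:message_1001');
```

Payloads are meant to stay small (under 8000 bytes by default), so the usual pattern is to send an identifier and have the client fetch the full row.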
If you happen to be using Rails and ActionCable, PostgreSQL is even supported out of the box.
Redis fundamentally fills a different niche than PostgreSQL and excels at things PostgreSQL doesn't aspire to. Examples include caching data with TTLs and storing and manipulating ephemeral data.
However, PostgreSQL has a lot more capabilities than you may expect when you approach it from the perspective of just another SQL database or some mysterious entity that lives behind your ORM.
There’s a good chance that the things you’re using Redis for may actually be good tasks for PostgreSQL as well. It may be a worthy tradeoff to skip Redis and save on the operational costs and development complexity of relying on multiple data services.
How Fighter Jets Lock On (and How the Targets Know)
The primary technology that a military aircraft uses to lock onto and track an enemy aircraft is its onboard radar. Aircraft radars typically have two modes: search and track. In search mode, the radar sweeps a radio beam across the sky in a zig-zag pattern. When the beam is reflected by a target aircraft, an indication is shown on the radar display. In search mode, no single aircraft is being tracked, but the pilot can usually tell generally what a particular radar return is doing, because with each successive sweep the return moves slightly.

On the fire control radar display of an F-16 Fighting Falcon in search mode, each white brick is a radar return. Because the radar is only scanning, not tracking, no other information is available about the radar targets. (There is one exception: the Doppler shift of the radar return can be measured to estimate how fast the aircraft is traveling towards or away from you, much like the pitch of an oncoming train's whistle can tell you how fast it's coming at you. This is displayed as the small white trend line originating from each brick.) When the cursors are over the bottom-most brick (closest to our aircraft), the pilot is ready to lock up this target.

Locking a target puts the radar into track mode. In track mode, the radar focuses its energy on a particular target. Because the radar is actually tracking a target, and not just displaying bricks when it gets a reflection back, it can tell the pilot a lot more about the target. With a target locked, the F-16's fire control radar display shows, along the top, a lot of information about what the radar target is doing: its aspect angle (the angle between its nose position and our nose position) is 160° to the left, and our closure rate is 828 knots.
With this information, the pilot gets a much better idea of what the aircraft is doing, but at the expense of information about other aircraft in the area. In the display described above, the bottom-most (closest) target is locked (circle around it), the two targets further away are tracked (yellow squares), and there are two radar returns even further away (white bricks). This demonstrates an advanced feature of modern radars: situational awareness modes. A radar in a situational awareness mode (SAM) combines both tracking and scanning, allowing a pilot to track one or a small number of "interesting" targets while not losing the big picture of what other targets are doing. In this mode, the radar beam sweeps the sky, while briefly and regularly pausing its scan to check up on a locked target.

Note that all of this comes with tradeoffs. In the end, a radar has only so much power, and you can put a lot of radar energy on one target, or spread it out weakly throughout the sky, or some compromise in between. The two vertical bars spanning the height of the display are the azimuth scan limits. They are the aircraft's way of telling you, "OK, I can both track this target and scan for other targets, but in return, I'm only going to scan a 40°-wide cone in front of the aircraft, instead of the usual 60°." Radar, like life, is full of tradeoffs.

An important thing to note is that a radar lock is not always required to launch weapons at a target. For guns kills, if the aircraft has a radar lock on a target, it can accurately gauge range to the target and provide the pilot with the appropriate corrections for lead and gravity drop to get an accurate guns kill.
Without the radar, the pilot simply has to rely on his or her own judgement. As an example, consider the F-16's HUD (heads-up display) when employing guns at a radar-locked target. It becomes incredibly simple: the small circle labeled "bullets at target range" is called the "death dot" by F-16 pilots. Basically, it represents where the cannon rounds would land if you fired right now and the rounds traveled the distance between you and the locked target. In other words, if you want a solid guns kill, simply fly the death dot onto the airplane. Super simple.

But what if there's no radar lock? Then there is no death dot, but you still have the funnel. The funnel represents the path the cannon rounds would travel out in front of you if you fired right now. The width of the funnel is equal to the apparent width of a predetermined wingspan at that particular range. So, if you didn't have a lock on your target, but you knew it had a wingspan of 35 feet, you could dial in 35 feet, then fly the funnel until its width exactly lined up with the width of the enemy aircraft's wings, then squeeze the trigger.

And what about missiles? Again, a radar lock is not required. For heat-seeking missiles, a radar lock is only used to train the seeker head onto the target. Without a radar lock, the seeker head scans the sky looking for "bright" (hot) objects, and when it finds one, it plays a distinctive whining tone to the pilot. The pilot does not need radar in this case; he just needs to maneuver his aircraft until he has "good tone," and then fire the missile. The radar only makes this process faster.

Radar-guided missiles come in two varieties: passive and active. Passive radar missiles do require a radar lock, because these missiles use the aircraft's reflected radar energy to track the target. Active radar missiles, however, have their own onboard radar, which locks and tracks a target.
But this radar is on a one-way trip, so it's considerably less expensive (and less powerful) than the aircraft's radar. So, these missiles normally get some guidance help from the launching aircraft until they fly close enough to the target that they can turn on their own radar and "go active." (This allows the launching aircraft to turn away and defend itself.) It is possible to fire an active radar missile with no radar lock (a so-called "maddog" launch); in this case, the missile will fly until it's nearly out of fuel, and then it will turn on its radar and pursue the first target it sees. This is not a recommended strategy if there are friendly aircraft in close proximity to the enemy.

As to the last part of your question: yes, an aircraft can tell if a radar is painting it or locked onto it. Radar is just radio waves, and just as your FM radio converts radio waves into sound, so can an aircraft analyze incoming radio signals to figure out who's doing what. The device that does this is called an RWR, or radar warning receiver, and it has both a video and an audio component. Although an aircraft's radar can only scan out in front of the aircraft, an aircraft can listen for incoming radar signals in any direction, so the RWR scope covers 360°. A digital signal processor looks for recognizable radio "chirps" that correspond to known radars and displays their azimuth on the scope. A chirp is a distinctive waveform that a radio uses: if two radios use the same waveform simultaneously, they'll confuse each other, because each radio won't know which radar returns are from its own transmitter. To prevent this, different radios tend to use distinct waveforms. This can also be used by the target aircraft to identify the type of radar being used, and therefore, possibly, the type of aircraft. In a typical RWR display, the RWR might show that it has detected an F-15 (a "15" with a hat on it, indicating an aircraft) at the 7-o'clock position.
The strength of the radar is plotted as distance from the center: the closer to the center, the stronger the detected radar signal, and therefore, possibly, the closer the transmitting aircraft. Detected at the 12- to 1-o'clock position are two surface-to-air missile (SAM) sites, an SA-5 "Gammon" and an SA-6 "Gainful". These are Russian SAM launching radars and represent a serious threat. The RWR computer has determined the SA-6 to be the highest-priority threat in the area, and thus has enclosed it with a diamond.

The RWR also has an audio component. Each time a new radar signal is detected, it is converted into an audio wave and played for the pilot. Because different radars "sound" different, pilots learn to recognize different airborne or surface threats by their distinctive tones. The sound is also an important cue to tell the pilot what the radar is doing: if the sound plays once, or intermittently, it means the radar is only painting our aircraft (in search mode). If a sound plays continuously, the radar has locked onto our aircraft and is in track mode, and thus the pilot's immediate attention is demanded. In some cases, the RWR can tell if the radar is in launch mode (sending radar data to a passive radar-guided missile), or if the radar is that of an active radar-guided missile. In either of these cases, a distinctive missile-launch tone is played and the pilot is advised to act immediately to counter the threat. Note that the RWR has no way of knowing if a heat-seeking missile is on its way to our aircraft.

Aside from radar, there are other technologies used to lock onto enemy aircraft and ground targets. A targeting pod is a very powerful camera mounted on an articulating swivel that allows it to look in nearly every direction. This camera is connected to an image processor that is able to tell vehicles and buildings apart from the surrounding terrain, and to track moving targets.
One example is the SNIPER XR targeting pod, which is able to track vehicles day and night, using visual or infra-red cameras. Heat-seeking missiles obviously use this same technology to home in on aircraft, and electro-optical missiles use this technology to track ground targets.

Lastly, there are laser-guided missiles as well. These "beam riders" follow a laser beam emanating from the aircraft to the target. Many ground vehicles use laser rangefinders as well, and some aircraft include a laser warning system (LWS) that works similarly to an RWR, but displays incoming laser signals instead.

This answer to "How does a fighter jet lock onto and keep track of an enemy aircraft?" originally appeared on Quora and has been lightly edited for grammar and clarity.
The System Is Down: How To Win At Risk By Using Systems Thinking

Systems thinking gives you an advantage in almost every area of life, even the game of Risk. Systems thinking is a way of viewing complicated networks in reality in terms of the relationships between the parts and the whole. It is about thinking holistically about such relationships so as to (1) truly understand how they work and (2) change them for the better. The best strategies in life (and in games) come from systems thinking. This is the case because things are far more complicated than they seem at first glance, and it takes careful attention to come to know them. Systems thinking offers names and categories for understanding the complexity of reality, and you can't really know anything without first giving it a name. Before we apply the power of systems thinking to the game of Risk, let's take a crash course in systems thinking first. (Note: I'm assuming you know the rules of the game of Risk; if not, read the rules of Risk here.)

The Simplest Example of a System: A Bathtub

You might not think of a bathtub as a system, but it is. You fill the tub to the desired level and temperature, constantly adjusting the faucet in response to the feedback you are getting from the temperature and the current amount of water in the tub. Then, when your goal is met, you take your bath and let the water out: a complete system. Granted, a bathtub is a very simple system, but it serves as an introduction to the discipline of systems thinking. If you want to go deeper, read Donella Meadows's Thinking in Systems. It is brilliant. Let's dig into the bathtub example a bit more and use it to examine the discrete parts of a system.

Every System Is Made of These Parts

Stocks: Stocks are the collection of resources or inputs in a system. In a bathtub, the stock is the amount of water in the tub.

Flows: Flows are movements between stocks.
A bathtub has two flows: the faucet letting water into the tub and the drain letting water out of it. The amount of water in the stock of the tub is the result of the interaction between the in-flow (the faucet) and the out-flow (the drain).

Reinforcing Feedback: Change in systems happens in loops, not lines. These loops take the form of feedback forces that interact with the stocks and flows. Reinforcing feedback, also known as "growth force," happens when the stocks in a system are increasing. In the bathtub example, the growth force is the water coming out of the faucet, increasing the stock of water in the tub.

Limiting Factor: Systems collapse if the growth force is allowed to run unchecked. At a certain point, the system hits a limiting factor. In the bathtub example, the limiting factor is the desired level of water in the tub. You can always spot the limiting factors in a system by looking at (1) the carrying capacities of the stocks or (2) the goals of the players in the system.

Balancing Feedback: Balancing feedback, or "balance force," kicks in as the system approaches one of its limiting factors. In a bathtub, as the amount of water in the tub approaches the desired water level, the person filling the tub reaches out and turns the faucet off. Suddenly, the system is balanced. Anywhere there is a goal (for example, the desired water level in a bath), you'll find forces at work trying to achieve that goal by balancing the system.

Equilibrium: When a system is balanced, it reaches equilibrium. There are two types of equilibrium: static and dynamic. In the tub example, if the faucet is off and the drain is plugged, static equilibrium has been reached, since no flow (in-flow or out-flow) is acting on the stock. Dynamic equilibrium is reached when the flows into and out of a stock continue but are equal, as would be the case if the water in a tub was filling at the same rate it was draining.
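The bathtub's stock-and-flow behavior is simple enough to simulate directly. The sketch below (the function and parameter names are mine, not from any systems-thinking library) fills a tub toward a target level; the balancing feedback is the conditional that shuts the faucet off at the goal, and the steady oscillation around the target at the end is dynamic equilibrium:

```python
def simulate_bathtub(target_level=100.0, faucet_rate=10.0, drain_rate=4.0, steps=60):
    """Simulate a tub as one stock (the water level) with an in-flow and an out-flow.

    Balancing feedback: the faucet turns off whenever the stock has reached
    the target level, the system's limiting factor.
    """
    level = 0.0   # the stock
    history = []
    for _ in range(steps):
        inflow = faucet_rate if level < target_level else 0.0  # balancing feedback
        outflow = min(drain_rate, level + inflow)              # can't drain more than exists
        level = level + inflow - outflow
        history.append(level)
    return history

levels = simulate_bathtub()
# The level climbs steadily (reinforcing growth), then settles into a narrow
# band around the target as the faucet toggles: dynamic equilibrium.
print(round(levels[-1], 1))  # prints 100.0
```

Changing `drain_rate` to 0 and plugging the drain after the faucet shuts off would give the static case instead: no flows at all, so the stock simply stops changing.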
Leverage: Systems are hard to change because balancing feedback does such a good job of returning the system to equilibrium. Leverage refers to the forces applied to a system in an attempt to change it. But, as you will see below, not all levers are created equal, nor equally effective. In fact, pulling on some levers only makes the sleeping dragons of balancing feedback wake up and lock down the system. However, savvy systems thinkers are able to find the leverage points in a system that can change it so dramatically that a new balance is reached, perhaps tipped in your favor.

But what does all this have to do with winning at Risk? A lot, as it turns out. The magic starts to happen when you do two things: (1) map the parts of Risk onto the parts of a system, and (2) analyze the system of Risk to find the best strategy according to the rules of systems thinking. Here we go.

Step One: Map the Parts of Risk onto the Parts of a System

Stocks: In the game of Risk, your stocks are the number of armies in your countries, the number of countries under your command, and the number of continents you control.

Flows: The in-flow in the game of Risk is the number of new armies you get each turn. The out-flow is the number of armies you lose in battle each round.

Reinforcing Feedback: There is a strong growth force at play in the game of Risk, since the number of extra armies you get is tied to the number of countries and continents you control. So the stronger your armies become, the faster they become stronger. If left unchecked, this would quickly become an exponentially reinforcing loop that would result in the strongest player quickly taking over the game.

Balancing Feedback: But, unlike reality, a good game would never let that happen. If you push the system in the wrong way, the system always pushes back. In Risk, the wrong way to push the system is to try to ride the reinforcing feedback all the way to victory, getting more and more continents until you win.
However, reinforcing feedback always triggers balancing feedback to kick in and keep the strong player in check. In fact, the stronger any single player gets, the stronger the balancing feedback arrayed against them becomes. In Risk, the main balancing feedback is the opposition the strongest player encounters from all the other players. That is why it is such a terrible idea to grab a continent too early: it awakens the balancing feedback before you are too strong to repel it. (Taking Australia is an exception. For some reason, everyone expects Australia to be taken in the first two turns, so the balancing feedback of the other players' fear isn't awakened. This exception is probably explained by the fact that you only get two bonus armies for keeping Australia; the small number lulls players into feeling that it is safe to let the player who controls Australia keep it.)

Limiting Factor: Because the balancing feedback is awakened by the perceived advantage any one player has (or is about to have) over the other players, the clearest limiting factor is almost always taking a continent. Remember, the limiting factors are triggers that the system deems dangerous enough to deploy balancing forces against. Because of this, taking a continent can be a bad strategy even though it promises to increase the flow of armies into your stocks. If the other players unite to oppose you, the result will be a diminishment of your stock of armies.

Equilibrium: In Risk, equilibrium sets a cap on how strong any one player is allowed to become. If you want to win, you have to wait until the equilibrium has risen high enough to allow you to have armies large enough for quick, devastating strikes. The more unexpected those strikes are, the more successful they will be.
People tend to defend only against their immediate neighbors, so they will not be worrying about your country with 30 armies all the way across the board, when the reality is that (depending on the stage of the game) a country with 30 armies can usually find a path to go wherever it wants, leaving a trail of death in its wake. Now that we have mapped the parts, let's try to figure out what to do with them.

Step Two: Determine the Goal of the System

In Risk, "the game of global domination," the goal is pretty clear: take over the world. Determining the goal of a system won't always be this easy, but this time it's a gimme. Let's press on.

Step Three: Start with the Goal and Strategize Backward

Starting with the goal of global domination, let's walk backward and see if we can find our way to a viable strategy that will let us reach the goal. I'll write it as a logical chain of if/then statements.

If the goal is global domination, then you need to make sure you have greater strength in armies than any other player.

If you need to have more armies than any other player, then you should do everything you can to (1) increase the rate at which you gain armies and (2) avoid losing the armies you receive. (We'll cover the best way to increase the rate at which you gain armies below, so let's focus on the second one here.)

If you need to avoid losing the armies you receive, then you need to avoid battles; specifically, you need to reduce the number of times you roll the dice as much as possible. And there it is, one half of the winning strategy: roll the dice as little as possible. You can't lose armies that never fight.

But how do you accomplish this? Just do these things:

Consolidate most of your armies on a handful of adjacent countries that are strong enough that no one wants to attack them. Keep the number of armies in the rest of your countries low.
This makes you continue to look weak, reduces the number of armies you might lose if another player takes a country from you, and makes your opponents divide their forces if they want to move into one of your weak countries.

Initiate as few attacks as possible. When you do initiate an attack, make sure you have an overwhelming advantage. Better yet, don't attack at all. Remember, you are trying to avoid having to roll the dice.

If you have to attack, do so only from a place of great advantage. Nothing wastes more armies than long, drawn-out battles between countries with many armies on them. If you engage only in short battles whenever possible, you will roll the dice fewer times.

Let your enemies break up each other's continents, not you. If you can get people to fight among themselves by not being the one to enforce the balancing feedback (i.e., attack someone to break up their control of a continent), someone else will have to do it. Just wait. Don't be the global police; let your opponents do the dirty work.

Take only one country per turn. Again, let your enemies kill one another's armies as they squabble for territory. They are doing your work for you. Your job is to grow stronger, not to win battles. Growing stronger slowly lets you keep a strong core of armies in a cluster of key countries without arousing the other players' worry. "But," you might be asking, "if I only take one country per turn, how will I ever win the game?" Keep reading.

Step Four: Find Leverage That Will Avoid Awakening Balancing Feedback

There are three ways to get more armies in the game of Risk: controlling more countries, controlling continents, and turning in cards. Of these three, controlling continents is the surest way to awaken balancing forces in the game that will oppose you on your path to global domination. That means if you can grow the size of your armies without taking a continent, you gain leverage that can change the whole game.
The first two are "safe." Most of the time, only taking continents awakens the dragons of balancing feedback. To get this leverage, you have to do two things:

Find a way to grow in strength by taking lots of countries (but not a whole continent). There is only one area of the board that will let you do this; luckily, it is also the part of the board that no one wants: Asia. Even though you get seven bonus armies for controlling Asia, most players don't pursue it because it is so hard to hold. That means it can only be successfully controlled in the final stages of the game. Thus, in the early stages, it can be your playground. There are enough countries in Asia to allow your in-flow of armies to keep rising as the game progresses without triggering balancing feedback.

Make sure you get lots of cards for bonus armies. The best way to do this is simply to take one country every turn. This will allow you to keep getting cards without spreading your forces too thin. When you have enough cards to earn bonus armies, you can (if the timing is right) move on to Step Five.

Step Five: Increase Your Stock of Armies Until You Can Make a Fast, Devastating Attack

To review, the strategy up to this point consists of (1) waiting, (2) not spreading yourself too thin, (3) taking one country per turn, (4) consolidating your armies on a few very strong countries, and (5) enabling your opponents to target one another. This strategy is a slow burn in the early parts of the game, but there comes a time when you need to go on the warpath. If you succeed in laying low and not getting into too many battles, you should be able to build up a good number of armies in at least one country (especially with card bonuses). If you play this strategy well, you might end up with three or four times the number of armies in the average country on the board (in fact, this should be a goal). When you reach that level, you have some decisions to make. Should you go on the offensive?
Is it time to take over an entire continent in one fell swoop? Maybe. Is it time to completely annihilate one of your opponents, taking their unspent cards in the process? Perhaps. Is it time to split your force in half, using one half as a beachhead to hold territory in another part of the board? Possibly. It is up to you and will depend on the current situation on the game board.

The important thing is to make a choice that will set you up well in the future, but not too well. Even after you make this big move, you will want to continue to avoid becoming the object of balancing feedback, which means the mission is still to strengthen your own position without being perceived as the top player. Sometimes this will mean not taking a whole continent; in that case, using your big offensive push just to weaken key opponents can be a good strategy. You're going to have to use your judgment. The key thing to remember is that when you reach this stage, the strategy isn't over. Just go back to step one and keep it going. Rinse, conquer, repeat.

Step Six: Don't Forget All This Other Stuff

Play the Players Too: The other players are part of the system of the game too. If you are going to have a hope of winning, you have to beat the others at the "player level" of the game. The strategy of avoiding taking continents will serve you well here because it sets you on a course of waiting at the back of the pack and drafting off of your opponents' poor choices. The players at the bottom are always on the same team; or, at least, that is the mentality you should encourage every chance you get.

Lose Fewer Armies Than Your Opponents: This may sound self-explanatory, but there are two ways to have more armies than your opponents: gain them faster (in-flow) or lose them more slowly (out-flow).
Most people pour all their attention into getting more in-flow than everyone else (more reinforcements per turn), but there is a competitive advantage to be found if you instead dedicate yourself to not losing the armies you get. If you follow the rest of this strategy, the effects of your conservative play should compound as you allow and encourage your opponents to hack away at each other and leave you alone.

Find the Limiting Factors: Limiting factors will show themselves as the system begins to try to maintain equilibrium, but you have to learn to recognize them when they do. Some of those limiting factors will originate on the player level of the game. When you hear people say things like, "If someone doesn't take that country away from Sam, he is going to run away with this game," it means Sam has hit a limiting factor on the social level of the game. Fan those sparks into flames. This is balancing feedback waiting to be triggered. Helping other players become the object of balancing feedback is a big part of this strategy.

Use the Threat of Your Strength to Control the Game: As you get stronger, specifically as you build up one country with a large number of armies on it, you can begin to exert influence on the social level of the game. The other players will begin to wonder what you are going to do with all that strength once you go on the warpath. Use that question in their minds to bend events in your favor. Make an alliance. Leak your plans to key allies. Work together with them to be part of the balancing feedback directed against the top player(s).

Know What Stage You're In: Risk has stages. Things that are possible in the early stages are not possible in later ones, and vice versa. To succeed, you have to read the stage of the system and make your moves accordingly.
Early stages will largely be about jockeying for position on the map, staking out territory, claiming a continent, and encountering light resistance from the other players. The middle stage will be about consolidating your first continent (unless you are using this brilliant strategy) and fending off the harrying attacks by people who don't want you to control that continent. By the late stage of the game, a few players may have been eliminated, each remaining player is entrenched in a corner of the board, and all are trying to build enough strength to take out their fellow players one by one.

Watch Out for the Death Spiral: The Death Spiral is another kind of reinforcing feedback at play in the middle and late stages of the game. It happens when one player becomes too weak to fend off the other players, and the stronger players try to eliminate them from the game completely, taking their unused cards and getting a huge bonus for themselves. Don't let this happen to you! If it does, revert to the player level of the game and try to awaken the balancing feedback inherent in the pity of your fellow players. Also, if you are in danger of becoming the victim of the Death Spiral, don't hoard cards; use them as soon as you can so you don't tempt your opponents to come and take them. Lastly, if you are one of the strong players, you can use the Death Spiral to your advantage by keeping weak enemy-controlled countries near your strong ones. If one of your fellow players is about to be knocked out of the game, just make sure their last country is within your range of attack, then grab it and the bonus cards that come with it.
This article was published online on March 10, 2021.
The United States had long been a holdout among Western democracies, uniquely and perhaps even suspiciously devout. From 1937 to 1998, church membership remained relatively constant, hovering at about 70 percent. Then something happened. Over the past two decades, that number has dropped to less than 50 percent, the sharpest recorded decline in American history. Meanwhile, the “nones”—atheists, agnostics, and those claiming no religion—have grown rapidly and today represent a quarter of the population.
But if secularists hoped that declining religiosity would make for more rational politics, drained of faith’s inﬂaming passions, they are likely disappointed. As Christianity’s hold, in particular, has weakened, ideological intensity and fragmentation have risen. American faith, it turns out, is as fervent as ever; it’s just that what was once religious belief has now been channeled into political belief. Political debates over what America is supposed to mean have taken on the character of theological disputations. This is what religion without religion looks like.
Not so long ago, I could comfort American audiences with a contrast: Whereas in the Middle East, politics is war by other means—and sometimes is literal war—politics in America was less existentially fraught. During the Arab Spring, in countries like Egypt and Tunisia, debates weren’t about health care or taxes—they were, with sometimes frightening intensity, about foundational questions: What does it mean to be a nation? What is the purpose of the state? What is the role of religion in public life? American politics in the Obama years had its moments of ferment—the Tea Party and tan suits—but was still relatively boring.
We didn’t realize how lucky we were. Since the end of the Obama era, debates over what it means to be American have become suffused with a fervor that would be unimaginable in debates over, say, Belgian-ness or the “meaning” of Sweden. It’s rare to hear someone accused of being un-Swedish or un-British—but un-American is a common slur, slung by both left and right against the other. Being called un-American is like being called “un-Christian” or “un-Islamic,” a charge akin to heresy.
This is because America itself is “almost a religion,” as the Catholic philosopher Michael Novak once put it, particularly for immigrants who come to their new identity with the zeal of the converted. The American civic religion has its own founding myth, its prophets and processions, as well as its scripture—the Declaration of Independence, the Constitution, and The Federalist Papers. In his famous “I Have a Dream” speech, Martin Luther King Jr. wished that “one day this nation will rise up and live out the true meaning of its creed.” The very idea that a nation might have a creed—a word associated primarily with religion—illustrates the uniqueness of American identity as well as its predicament.
The notion that all deeply felt conviction is sublimated religion is not new. Abraham Kuyper, a theologian who served as the prime minister of the Netherlands at the dawn of the 20th century, when the nation was in the early throes of secularization, argued that all strongly held ideologies were effectively faith-based, and that no human being could survive long without some ultimate loyalty. If that loyalty didn’t derive from traditional religion, it would ﬁnd expression through secular commitments, such as nationalism, socialism, or liberalism. The political theorist Samuel Goldman calls this “the law of the conservation of religion”: In any given society, there is a relatively constant and ﬁnite supply of religious conviction. What varies is how and where it is expressed.
No longer explicitly rooted in white, Protestant dominance, understandings of the American creed have become richer and more diverse—but also more fractious. As the creed fragments, each side seeks to exert exclusivist claims over the other. Conservatives believe that they are faithful to the American idea and that liberals are betraying it—but liberals believe, with equal certitude, that they are faithful to the American idea and that conservatives are betraying it. Without the common ground produced by a shared external enemy, as America had during the Cold War and brieﬂy after the September 11 attacks, mutual antipathy grows, and each side becomes less intelligible to the other. Too often, the most bitter divides are those within families.
No wonder the newly ascendant American ideologies, having to ﬁll the vacuum where religion once was, are so divisive. They are meant to be divisive. On the left, the “woke” take religious notions such as original sin, atonement, ritual, and excommunication and repurpose them for secular ends. Adherents of wokeism see themselves as challenging the long-dominant narrative that emphasized the exceptionalism of the nation’s founding. Whereas religion sees the promised land as being above, in God’s kingdom, the utopian left sees it as being ahead, in the realization of a just society here on Earth. After Supreme Court Justice Ruth Bader Ginsburg died in September, droves of mourners gathered outside the Supreme Court—some kneeling, some holding candles—as though they were at the Western Wall.
On the right, adherents of a Trump-centric ethno-nationalism still drape themselves in some of the trappings of organized religion, but the result is a movement that often looks like a tent revival stripped of Christian witness. Donald Trump’s boisterous rallies were more focused on blood and soil than on the son of God. Trump himself played both savior and martyr, and it is easy to marvel at the hold that a man so imperfect can have on his soldiers. Many on the right ﬁnd solace in conspiracy cults, such as QAnon, that tell a religious story of earthly corruption redeemed by a godlike force.
Though the United States wasn’t founded as a Christian nation, Christianity was always intertwined with America’s self-definition. Without it, Americans—conservatives and liberals alike—no longer have a common culture upon which to fall back.
Unfortunately, the various strains of wokeism on the left and Trumpism on the right cannot truly ﬁll the spiritual void—what the journalist Murtaza Hussain calls America’s “God-shaped hole.” Religion, in part, is about distancing yourself from the temporal world, with all its imperfection. At its best, religion confers relief by withholding ﬁnal judgments until another time—perhaps until eternity. The new secular religions unleash dissatisfaction not toward the possibilities of divine grace or justice but toward one’s fellow citizens, who become embodiments of sin—“deplorables” or “enemies of the state.”
This is the danger in transforming mundane political debates into metaphysical questions. Political questions are not metaphysical; they are of this world and this world alone. “Some days are for dealing with your insurance documents or ﬁghting in the mud with your political opponents,” the political philosopher Samuel Kimbriel recently told me, “but there are also days for solemnity, or fasting, or worship, or feasting—things that remind us that the world is bigger than itself.”
Absent some new religious awakening, what are we left with? One alternative to American intensity would be a world-weary European resignation. Violence has a way of taming passions, at least as long as it remains in active memory. In Europe, the terrors of the Second World War are not far away. But Americans must go back to the Civil War for violence of comparable scale—and for most Americans, the violence of the Civil War bolsters, rather than undermines, the national myth of perpetual progress. The war was redemptive—it led to a place of promise, a place where slavery could be abolished and the nation made whole again. This, at least, is the narrative that makes the myth possible to sustain.
For better and worse, the United States really is nearly one of a kind. France may be the only country other than the United States that believes itself to be based on a unifying ideology that is both unique and universal—and avowedly secular. The French concept of laïcité requires religious conservatives to privilege being French over their religious commitments when the two are at odds. With the rise of the far right and persistent tensions regarding Islam’s presence in public life, the meaning of laïcité has become more controversial. But most French people still hold ﬁrm to their country’s founding ideology: More than 80 percent favor banning religious displays in public, according to one recent poll.
In democracies without a pronounced ideological bent, which is most of them, nationhood must instead rely on a shared sense of being a distinct people, forged over centuries. It can be hard for outsiders and immigrants to embrace a national identity steeped in ethnicity and history when it was never theirs.
Take postwar Germany. Germanness is considered a mere fact—an accident of birth rather than an aspiration. And because shame over the Holocaust is considered a national virtue, the country has at once a strong national identity and a weak one. There is pride in not being proud. So what would it mean for, say, Muslim immigrants to love a German language and culture tied to a history that is not theirs—and indeed a history that many Germans themselves hope to leave behind?
An American who moves to Germany, lives there for years, and learns the language remains an American—an “expat.” If America is a civil religion, it would make sense that it stays with you, unless you renounce it. As Jeff Gedmin, the former head of the Aspen Institute in Berlin, described it to me: “You can eat strudel, speak ﬂuent German, adapt to local culture, but many will still say of you Er hat einen deutschen Pass—‘He has a German passport.’ No one starts calling you German.” Many native-born Americans may live abroad for stretches, but few emigrate permanently. Immigrants to America tend to become American; emigrants to other countries from America tend to stay American.
The last time I came back to the United States after being abroad, the customs ofﬁcer at Dulles airport, in Virginia, glanced at my passport, looked at me, and said, “Welcome home.” For my customs ofﬁcer, it went without saying that the United States was my home.
In In the Light of What We Know, a novel by the British Bangladeshi author Zia Haider Rahman, the protagonist, an enigmatic and troubled British citizen named Zafar, is envious of the narrator, who is American. “If an immigration ofﬁcer at Heathrow had ever said ‘Welcome home’ to me,” Zafar says, “I would have given my life for England, for my country, there and then. I could kill for an England like that.” The narrator reﬂects later that this was “a bitter plea”:
When Americans have expressed disgust with their country, they have tended to frame it as fulﬁllment of a patriotic duty rather than its negation. As James Baldwin, the rare American who did leave for good, put it: “I love America more than any other country in the world, and, exactly for this reason, I insist on the right to criticize her perpetually.” Americans who dislike America seem to dislike leaving it even more (witness all those liberals not leaving the country every time a Republican wins the presidency, despite their promises to do so). And Americans who do leave still ﬁnd a way, like Baldwin, to love it. This is the good news of America’s creedal nature, and may provide at least some hope for the future. But is love enough?
Conﬂicting narratives are more likely to coexist uneasily than to resolve themselves; the threat of disintegration will always lurk nearby.
On January 6, the threat became all too real when insurrectionary violence came to the Capitol. What was once in the realm of “dreampolitik” now had physical force. What can “unity” possibly mean after that?
Can religiosity be effectively channeled into political belief without the structures of actual religion to temper and postpone judgment? There is little sign, so far, that it can. If matters of good and evil are not to be resolved by an omniscient God in the future, then Americans will judge and render punishment now. We are a nation of believers. If only Americans could begin believing in politics less fervently, realizing instead that life is elsewhere. But this would come at a cost—because to believe in politics also means believing we can, and probably should, be better.
In History Has Begun, the author, Bruno Maçães—Portugal’s former Europe minister—marvels that “perhaps alone among all contemporary civilizations, America regards reality as an enemy to be defeated.” This can obviously be a bad thing (consider our ineffectual ﬁght against the coronavirus), but it can also be an engine of rejuvenation and creativity; it may not always be a good idea to accept the world as it is. Fantasy, like belief, is something that humans desire and need. A distinctive American innovation is to insist on believing even as our fantasies and dreams drift further out of reach.
This may mean that the United States will remain unique, torn between this world and the alternative worlds that secular and religious Americans alike seem to long for. If America is a creed, then as long as enough citizens say they believe, the civic faith can survive. Like all other faiths, America’s will continue to fragment and divide. Still, the American creed remains worth believing in, and that may be enough. If it isn’t, then the only hope might be to get down on our knees and pray.
Drop any ﬁles to any devices on your LAN.
No need to use instant messaging for that anymore.
When we say it, we mean it. iOS, Android, macOS, Windows, Linux, name yours.
Uses your local network for transferring. Internet speed is not a limit.
Intuitive UI. You know how to use it when you see it.
Uses state-of-the-art cryptography algorithms. No one else can see your files.
Outside? No problem. LANDrop can work on your personal hotspot, without consuming cellular data.
Doesn’t compress your photos and videos when sending.
Thanks for using LANDrop!
LANDrop Organization doesn’t collect, share, or store any of your personal data. All personal data required for the normal operation of the app are stored entirely and exclusively on your device.
When you send or receive ﬁles to or from other devices, your device’s name, type, and/or local IP address may be shared with the other party. You can choose not to actively share this information by turning off “Discoverable” in the app. This information is not collected, shared, or stored by LANDrop Organization in any way and is only used for the necessary operation of the app.
LANDrop is featured in the following sites (sorted in alphabetical order):
A while back there was a thread on one of our company mailing lists about SSH quoting, and I posted a long answer to it. Since then a few people have asked me questions that caused me to reach for it, so I thought it might be helpful if I were to anonymize the original question and post my answer here.
The question was why a sequence of commands involving ssh and ﬁddly quoting produced the output they did. The ﬁrst example was this:
Oh hi, my dubious life choices have been such that this is my specialist subject!
This is because SSH command-line parsing is not quite what you expect.
First, recall that your local shell will apply its usual parsing, and the actual OS-level execution of ssh will be like this:
Now, the SSH wire protocol only takes a single string as the command, with the expectation that it should be passed to a shell by the remote end. The OpenSSH client deals with this by taking all its arguments after things like options and the target, which in this case are:
It then joins them with a single space:
This is passed as a string to the server, which then passes that entire string to a shell for evaluation, so as if you’d typed this directly on the server:
The shell then parses this as two commands:
The directory change thus happens in a subshell (actually it doesn’t quite even do that, because bash -lc cd /tmp in fact ends up just calling cd, because of the way bash -c parses multiple arguments), and then that subshell exits, then pwd is called in the outer shell which still has the original working directory.
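A minimal, runnable simulation of this parsing behaviour (the host and command are placeholders, and ssh itself plus bash’s -l flag are dropped so the sketch runs locally without a remote server or login-profile noise):

```shell
# Suppose the local invocation was something of this shape:
#   ssh user@host bash -lc "cd /tmp;pwd"
# The local shell strips the double quotes, ssh joins its remaining
# arguments with single spaces, and the remote shell then evaluates the
# joined string — as if you had typed:
#   bash -lc cd /tmp;pwd
# We can reproduce that remote-side evaluation locally:
sh -c 'bash -c cd /tmp; pwd'
# bash -c receives only "cd" as its command string ("/tmp" becomes $0),
# so the directory change is lost when that subshell exits, and pwd runs
# in the outer shell's original working directory.
```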
The second example was this:
Following the logic above, this ends up as if you’d run this on the server:
The third example was this:
And this is as if you’d run:
Now, I wouldn’t have implemented the SSH client this way, because I agree that it’s confusing. But /usr/bin/ssh is used as a transport for other things so much that changing its behaviour now would be enormously disruptive, so it’s probably impossible to ﬁx. (I have occasionally agitated on openssh-unix-dev@ for at least documenting this better, but haven’t made much headway yet; I need to get round to preparing a documentation patch.) Once you know about it you can use the proper quoting, though. In this case that would simply be:
Or if you do need to specifically invoke bash -l there for some reason (I’m assuming that the original example was reduced from something more complicated), then you can minimise your confusion by passing the whole thing as a single string in the form you want the remote sh -c to see, in a way that ensures that the quotes are preserved and sent to the server rather than being removed by your local shell:
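A hedged sketch of both fixes (the host name and the /tmp example are placeholders, not the original thread’s commands):

```shell
# Simple case: one single-quoted argument reaches the remote shell intact.
#   ssh user@host 'cd /tmp && pwd'
# If bash -l really is needed, nest the quoting so the inner double quotes
# survive the local shell and arrive at the remote sh -c unscathed:
#   ssh user@host 'bash -lc "cd /tmp && pwd"'
# The single-string evaluation the remote end performs can be checked
# locally:
sh -c 'cd /tmp && pwd'
```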
The recent reveal of Meltdown and Spectre reminded me of the time I found a related design bug in the Xbox 360 CPU — a newly added instruction whose mere existence was dangerous.
Back in 2005 I was the Xbox 360 CPU guy. I lived and breathed that chip. I still have a 30-cm CPU wafer on my wall, and a four-foot poster of the CPU’s layout. I spent so much time understanding how that CPU’s pipelines worked that when I was asked to investigate some impossible crashes I was able to intuit how a design bug must be their cause. But ﬁrst, some background…
The Xbox 360 CPU is a three-core PowerPC chip made by IBM. The three cores sit in three separate quadrants with the fourth quadrant containing a 1-MB L2 cache — you can see the different components, in the picture at right and on my CPU wafer. Each core has a 32-KB instruction cache and a 32-KB data cache.
Trivia: Core 0 was closer to the L2 cache and had measurably lower L2 latencies.
The Xbox 360 CPU had high latencies for everything, with memory latencies being particularly bad. And, the 1-MB L2 cache (all that could ﬁt) was pretty small for a three-core CPU. So, conserving space in the L2 cache in order to minimize cache misses was important.
CPU caches improve performance due to spatial and temporal locality. Spatial locality means that if you’ve used one byte of data then you’ll probably use other nearby bytes of data soon. Temporal locality means that if you’ve used some memory then you will probably use it again in the near future.
But sometimes temporal locality doesn’t actually happen. If you are processing a large array of data once-per-frame then it may be trivially provable that it will all be gone from the L2 cache by the time you need it again. You still want that data in the L1 cache so that you can beneﬁt from spatial locality, but having it consuming valuable space in the L2 cache just means it will evict other data, perhaps slowing down the other two cores.
Normally this is unavoidable. The memory coherency mechanism of our PowerPC CPU required that all data in the L1 caches also be in the L2 cache. The MESI protocol used for memory coherency requires that when one core writes to a cache line that any other cores with a copy of the same cache line need to discard it — and the L2 cache was responsible for keeping track of which L1 caches were caching which addresses.
But, the CPU was for a video game console and performance trumped all so a new instruction was added — xdcbt. The normal PowerPC dcbt instruction was a typical prefetch instruction. The xdcbt instruction was an extended prefetch instruction that fetched straight from memory to the L1 d-cache, skipping L2. This meant that memory coherency was no longer guaranteed, but hey, we’re video game programmers, we know what we’re doing, it will be ﬁne.
I wrote a widely-used Xbox 360 memory copy routine that optionally used xdcbt. Prefetching the source data was crucial for performance and normally it would use dcbt but pass in the PREFETCH_EX ﬂag and it would prefetch with xdcbt. This was not well-thought-out. The prefetching was basically:
A game developer who was using this function reported weird crashes — heap corruption crashes, but the heap structures in the memory dumps looked normal. After staring at the crash dumps for awhile I realized what a mistake I had made.
Memory that is prefetched with xdcbt is toxic. If it is written by another core before being ﬂushed from L1 then two cores have different views of memory and there is no guarantee their views will ever converge. The Xbox 360 cache lines were 128 bytes and my copy routine’s prefetching went right to the end of the source memory, meaning that xdcbt was applied to some cache lines whose latter portions were part of adjacent data structures. Typically this was heap metadata — at least that’s where we saw the crashes. The incoherent core saw stale data (despite careful use of locks), and crashed, but the crash dump wrote out the actual contents of RAM so that we couldn’t see what happened.
So, the only safe way to use xdcbt was to be very careful not to prefetch even a single byte beyond the end of the buffer. I ﬁxed my memory copy routine to avoid prefetching too far, but while waiting for the ﬁx the game developer stopped passing the PREFETCH_EX ﬂag and the crashes went away.
So far so normal, right? Cocky game developers play with ﬁre, ﬂy too close to the sun, marry their mothers, and a game console almost misses Christmas.
But, we caught it in time, we got away with it, and we were all set to ship the games and the console and go home happy.
And then the same game started crashing again.
The symptoms were identical. Except that the game was no longer using the xdcbt instruction. I could step through the code and see that. We had a serious problem.
I used the ancient debugging technique of staring at my screen with a blank mind, let the CPU pipelines ﬁll my subconscious, and I suddenly realized the problem. A quick email to IBM conﬁrmed my suspicion about a subtle internal CPU detail that I had never thought about before. And it’s the same culprit behind Meltdown and Spectre.
The Xbox 360 CPU is an in-order CPU. It’s pretty simple really, relying on its high frequency (not as high as hoped despite 10 FO4) for performance. But it does have a branch predictor — its very long pipelines make that necessary. Here’s a publicly shared CPU pipeline diagram I made (my cycle-accurate version is NDA only, but looky here) that shows all of the pipelines:
You can see the branch predictor, and you can see that the pipelines are very long (wide on the diagram) — plenty long enough for mispredicted instructions to get up to speed, even with in-order processing.
So, the branch predictor makes a prediction and the predicted instructions are fetched, decoded, and executed — but not retired until the prediction is known to be correct. Sound familiar? The realization I had — it was new to me at the time — was what it meant to speculatively execute a prefetch. The latencies were long, so it was important to get the prefetch transaction on the bus as soon as possible, and once a prefetch had been initiated there was no way to cancel it. So a speculatively-executed xdcbt was identical to a real xdcbt! (a speculatively-executed load instruction was just a prefetch, FWIW).
And that was the problem — the branch predictor would sometimes cause xdcbt instructions to be speculatively executed and that was just as bad as really executing them. One of my coworkers (thanks Tracy!) suggested a clever test to verify this — replace every xdcbt in the game with a breakpoint. This achieved two things:
The breakpoints were not hit, thus proving that the game was not executing xdcbt instructions.
The crashes went away.
I knew that would be the result and yet it was still amazing. All these years later, and even after reading about Meltdown, it’s still nerdy cool to see solid proof that instructions that were not executed were causing crashes.
The branch predictor realization made it clear that this instruction was too dangerous to have anywhere in the code segment of any game — controlling when an instruction might be speculatively executed is too difﬁcult. The branch predictor for indirect branches could, theoretically, predict any address, so there was no “safe place” to put an xdcbt instruction. And, if speculatively executed it would happily do an extended prefetch of whatever memory the speciﬁed registers happened to randomly contain. It was possible to reduce the risk, but not eliminate it, and it just wasn’t worth it. While Xbox 360 architecture discussions continue to mention the instruction I doubt that any games ever shipped with it.
I mentioned this once during a job interview — “describe the toughest bug you’ve had to investigate” — and the interviewer’s reaction was “yeah, we hit something similar on the Alpha processor”. The more things change…
Thanks to Michael for some editing.
How can a branch that is never taken be predicted to be taken? Easy. Branch predictors don’t maintain perfect history for every branch in the executable — that would be impractical. Instead simple branch predictors typically squish together a bunch of address bits, maybe some branch history bits as well, and index into an array of two-bit entries. Thus, the branch predict result is affected by other, unrelated branches, leading to sometimes spurious predictions. But it’s okay, because it’s “just a prediction” and it doesn’t need to be right.
Discussions of this post can be found on hacker news (hacker news in 2021), r/programming, r/emulation, and twitter.
A somewhat related (Xbox, caches) bug was discussed a few years ago here.
NymphCast is a software solution which turns your choice of Linux-capable hardware into an audio and video source for a television or powered speakers. It enables the streaming of audio and video over the network from a wide range of client devices, as well as the streaming of internet media to a NymphCast server, controlled by a client device.
In addition, the server supports powerful NymphCast apps written in AngelScript to extend the overall NymphCast functionality with e.g. 3rd party audio / video streaming protocol support on the server side, and cross-platform control panels served to the client application that integrate with the overall client experience.
NymphCast requires at least the server application to run on a target device, while the full functionality is provided in conjunction with a remote control device:
Client-side core functionality is provided through the NymphCast library.
The current development version is v0.1-alpha4. Version 0.1 will be the ﬁrst release. The following list contains the major features that are planned for the v0.1 release, along with their implementation status.
* Streaming online content by passing a URL to the server.
* Support all mainstream audio and video codecs using ffmpeg.
* Playback of media content shared on the local network.
The NymphCast project consists of multiple components:
The NymphCast Player provides NymphCast client functionality. It is also a demonstration platform for the NymphCast SDK (see details on the SDK later in this document). It is designed to run on any OS that is supported by the Qt framework.
The server should work on any platform that is supported by a C++17 toolchain and the LibPoco dependency. This includes Windows, MacOS, Linux and BSD.
FFmpeg and SDL2 libraries are used for audio and video playback. Both of which are supported on a wide variety of platforms, with Linux, MacOS and Windows being the primary platforms. System requirements also depend on whether only audio or also video playback is required. The latter can be disabled, which drops any graphical output requirement.
Memory requirements depend on the NymphCast Server conﬁguration: by default the ffmpeg library uses an internal 32 kB buffer, and the server itself a 20 MB buffer. The latter can be conﬁgured using the (required) conﬁguration INI ﬁle, allowing it to be tweaked to ﬁt the use case.
For the Qt-based NymphCast Player, a target platform needs to support LibPoco and have a C++ compiler which supports C++17 or better, along with Qt5 support. Essentially, this means any mainstream desktop OS including Linux, Windows, BSD and MacOS should qualify, along with mobile platforms. Currently Android is also supported, with iOS support planned.
For the CLI-based NymphCast Client, only LibPoco and C++17 support are required.
Mobile platforms are a work in progress. An Android client (native Java with JNI) is in development.
The repository currently contains the NymphCast server, client SDK and NymphCast Player client sources.
To start using NymphCast, you need a device on which the server will be running (most likely a SBC or other Linux system). NymphCast is offered as binaries for selected distros, and as source code for use and development on a variety of platforms.
NymphCast is currently in Alpha stage. Experimental releases are available on Github (see the ‘Releases’ folder).
Some packages also exist for selected distros.
* APK for installation on Android, see ‘Releases’
If pre-compiled releases for your target device or operating system are currently not listed above, you may need to build the server and client applications from source.
The server binary can be started with just a conﬁguration ﬁle. To start the server, execute the binary (from the bin/ folder) to have it start listening on port 4004:
The server will listen on all network interfaces for incoming connections. It supports the following options:
The client binary supports the following ﬂags:
The NymphCast Player is a GUI-based application and accepts no command line options.
Note: This section is for building the project from source. Pre-built binaries are provided in the ‘Releases’ folder.
The steps below assume building the server part on a system running a current version of Debian (Buster) or similarly current version of Arch (Manjaro) Linux or Alpine Linux. The player client demo application can be built on Linux/BSD/MacOS with a current GCC toolchain, or MSYS2 on Windows with MinGW toolchain.
Once the project ﬁles have been downloaded, run the setup.sh script in the project root, or install the dependencies and run the Makeﬁle in the client and server folders as described. Either method will output a binary into the newly created bin/ folder.
To build the corresponding client-related parts of NymphCast, in addition to a C++ toolchain with C++17 support, one needs the dependencies as listed below.
If using a compatible OS (e.g. Debian Buster, Alpine Linux or Arch Linux), one can use the setup script:
Run the setup.sh script in the project root to perform the below tasks automatically.
Run the install_linux.sh script in the project root to install the binaries and set up a systemd/OpenRC service on Linux systems.
Else, use the manual procedure:
Check-out NymphRPC elsewhere and build the library with make lib.
Use sudo make install to install the server and associated ﬁles.
Use sudo make install-systemd (SystemD) or sudo make install-openrc (OpenRC) to install the relevant service ﬁle.
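Put together, the manual procedure above might look like the following (the directory layout is an assumption; only the make targets come from the steps themselves):

```shell
# Manual build/install sketch. Paths are assumptions; the make targets
# (lib, install, install-systemd / install-openrc) are the ones described.
(cd NymphRPC && make lib)                   # build the NymphRPC dependency
(cd NymphCast/src/server && make)           # build the server
(cd NymphCast/src/server && sudo make install)
(cd NymphCast/src/server && sudo make install-systemd)  # or: install-openrc
```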
This demonstration client uses Qt5 to provide user interface functionality. The binary release comes with the necessary dependencies, but when building it from source, make sure Qt5.x is installed or get it from www.qt.io.
Or (building and running on Windows & other desktop platforms):
Build the libnymphcast library in the src/client_lib folder using the Makefile in that folder: make lib.
Create player/NymphCastPlayer/build folder and change into it.
The player binary is created either in the same build folder or in a debug/ sub-folder.
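These steps correspond to a standard out-of-source qmake build; the .pro file name is an assumption:

```shell
cd player/NymphCastPlayer
mkdir -p build && cd build
qmake ../NymphCastPlayer.pro   # .pro file name assumed
make
# The player binary ends up in this build folder,
# or in a debug/ sub-folder depending on the build spec.
```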
Compile the dependencies (NymphCast client SDK, NymphRPC & Poco) for the target Android platforms.
Ensure the dependency libraries, along with their headers, are installed in the Android NDK under the folder for the target platform, where TARGET is the target Android platform (ARMv7, AArch64, x86, x86_64). Header files are placed in the accompanying usr/include folder.
Open the Qt project in a Qt Creator instance which has been configured for building for Android targets, and build the project.
An APK is produced, which can be loaded on any supported Android device.
Now you should be able to execute the player binary, connect to the server instance using its IP address and start casting media from a file or URL.
The focus of the project is currently on the development of the NymphCast server and the protocol parts. Third parties are encouraged to contribute server-side app support for their services, and developers in general are encouraged to contribute to server- and client-side development.
The current server and client documentation is hosted at the Nyanko website.
An SDK has been made available in the src/client_lib/ folder. The player project under player/ uses the SDK as part of a Qt5 project to implement a NymphCast client which exposes all of the NymphCast features to the user.
To use the SDK, the Makefile in the SDK folder can be executed with a simple make command, after which a library file can be found in the src/client_lib/lib folder.
Note: to compile the SDK, both NymphRPC and LibPOCO (1.5+) must be installed.
Note: For Android, one can compile for ARMv7 Android using make lib ANDROID=1, and for AArch64 Android using make lib ANDROID64=1. This requires that the Android SDK and NDK are installed and on the system path.
After this, the only files needed by a client project are this library file and the nymphcast_client.h header file.
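Building the SDK and then compiling a client against it might look like this; the library file name and the exact linker flags depend on how NymphRPC and Poco were installed, so treat all of them as assumptions:

```shell
# Build the client SDK library (output name assumed):
cd src/client_lib
make
ls lib/            # library file should appear here

# Compile a hypothetical client against the SDK
# (my_client.cpp and the -l flags are assumptions):
g++ -std=c++17 my_client.cpp -I src/client_lib -L src/client_lib/lib \
    -lnymphcast -lnymphrpc -lPocoNet -lPocoFoundation -o my_client
```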
NymphCast is a fully open source project. The full, 3-clause BSD-licensed source code can be found at its project page on GitHub, along with binary releases.
NymphCast is fully free, but its development relies on your support. If you appreciate the project, a contribution, Ko-Fi tip or donation will help support its continued development.
Over the past decade, Lisbon’s city hall has regularly shared the personal information of human rights activists with a variety of repressive regimes, exposing them and their families to untold danger.
The practice was exposed Friday after a group of Russian dissidents revealed earlier this week that city authorities had shared their personal data with the Russian embassy and the Ministry of Foreign Affairs in Moscow.
After initially brushing off the incident as a bureaucratic mishap, municipal authorities on Friday admitted it was actually part of city hall’s standard operating procedure: Since 2011 city employees have disclosed the names, identification numbers, home addresses and telephone numbers of activists to countries that protesters were targeting.
Authorities had the information because of a local ordinance that requires activists seeking to hold protests to submit that personal information to city hall, which would then forward the information to the police officers tasked with ensuring the events are carried out in a safe environment.
But the discovery that such information was also being shared with repressive regimes — including Angola, Venezuela and China — has stunned and dismayed Lisbon-based dissident groups, who are now worried that their leading members might be in the crosshairs of foreign governments.
And it’s called into question Portugal’s status as a place of refuge for political exiles, as well as its reputation as a country that defends the right of free expression. Already, Lisbon’s mayor is facing calls to resign and international civil rights leaders in the country are bemoaning the stain they fear this will leave on Portugal’s global perception.
“I found out this morning and I’m honestly in shock,” said Alexandra Correia, coordinator of Portugal’s Tibet Support Group, who told POLITICO that her personal information was shared with the Chinese embassy in April 2019 after she applied for permission to hold a rally in favor of the 11th Panchen Lama, who has been detained by Chinese authorities since 1995.
“It’s especially bizarre because our protest was held on the Largo de Camões, which is nowhere near the Chinese embassy, so city hall can’t even argue that they informed them for security reasons,” Correia said.
The activist said that the revelation had scared her and terrified her daughter, who is now worried about what might happen to family members in Tibet, where Chinese authorities still practice capital punishment.
“This situation can’t be written off as a bureaucratic mishap,” Correia said, adding that she would join other dissident groups in pressuring city hall to respond for its actions. “It’s a grave violation of my privacy, of my fundamental rights as a European, and it’s unacceptable that this has been going on in a democratic country within the European Union.”
In addition to Correia, representatives for the Committee for Solidarity with Palestine told the Portuguese media they had also discovered their information was shared with the Israeli embassy. That group expressed concern over how Israeli secret services might track their members, and cited a Haaretz article reporting on Mossad’s database of activists who speak out against the Israeli government.
The scandal surrounding Lisbon’s information sharing practices has put Mayor Fernando Medina in a delicate political situation just four months prior to the city’s municipal elections.
On Thursday, the conservative opposition candidate, Carlos Moedas — a former EU commissioner for research and science — called for Medina to resign over the incident. Portuguese President Marcelo Rebelo de Sousa called the disclosures “deeply regrettable,” and said that in a democratic nation everyone deserved to have their fundamental rights respected.
Medina took to national broadcaster RTP on Thursday night to publicly apologize for what he deemed a “bureaucratic error,” which he said was the result of the city following “outdated laws.” Although he accused the opposition of using the scandal to get “political leverage,” the mayor acknowledged that city hall’s internal procedures needed to be changed in order to ensure that the situation was never repeated.
Pedro Neto, executive director of Amnesty International’s Portuguese affiliate, said that changing procedures was not enough, and demanded that Lisbon go further to protect those it had put in “grave danger.”
“Protests have been held in Lisbon against the imprisonment of millions of Uyghurs in China, and city hall gave the Chinese embassy not only the information to locate the organizers here in Portugal, but to go after their families in China,” Neto said. “It’s unbelievable that our government has been complicit in that repression.”
Neto said that the city of Lisbon now had the moral obligation to do a comprehensive review of all data shared with foreign powers, and to inform all the outed activists. “Evidently, both our interior and foreign affairs ministries need to be involved so that the affected parties can have their protection ensured within Portugal, and their families’ well-being guaranteed abroad.”
The Amnesty International director called the affair a “national embarrassment” and said that it reinforced the impression that Portugal was just a “small country that’s subservient to economic giants.”
“In the same way that Lisbon has failed to defend human rights here, our national leaders have failed to do so during their turn in the presidency of the European Union,” Neto added, referencing Portugal’s period as rotating president of the Council of the EU, which ends in July.
“We could have been standard-bearers for human rights and the founding values of the EU, but instead we’ve kept a low profile, and now we’re ending our presidency with this scandal at home.”