The jig is almost up now for Veep
and Silicon Valley.
We closed down Girls.
We’re waiting for Game of Thrones to start up again.
Big Little Lies wasn’t to my taste, so we didn’t get far on that one.
Orange is the New Black held us for a while, but then I got tired of it, and there’s a season and a half or so I haven’t seen.
We finished Homeland and are between seasons.
We just finished Better Call Saul and are sadly beginning the long wait for renewal and the next season.
We finished Crashing
and Master of None,
neither of which is guaranteed a new season. I finished Archer Season 8.
That’s a show that seems to have run out of variations on a theme, though I could be wrong. I’ll be surprised to see it back, but I’ll watch it if it returns. It’s a one-trick pony, but it works for me. I finished Westworld.
We are going back to House of Cards, we’ve got a season and a half left.
I quit that a while ago because, prior to the most recent election, I just thought it was too far-fetched. Now, with a Queens landlord as President and his stripper wife as First Lady, not so much. Even if Hillary had won, that would still have played into the House of Cards theme. Only a Bernie win would have broken the mold. We live in strange times.
Which kind of sucks. If you want to filter on a column, it has to be either an aggregate function over all rows, or stored in that table (or in a path from that table). It can’t be a function of the individual row. That means any row-level functional properties have to be computed and stored whenever you save the row.
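A sketch of that workaround in plain Python (the class and field names here are my own illustration, not any particular framework): the derived value is computed and stored at save time, so a later filter runs against a stored column instead of a per-row function.

```python
class OrderRow:
    """Hypothetical record type: the filterable field must be stored, not computed on read."""

    def __init__(self, qty, unit_price):
        self.qty = qty
        self.unit_price = unit_price
        self.total = None  # derived column, maintained at save time

    def save(self, table):
        # Maintain the row-level functional property here, since the
        # query layer can only filter on stored columns.
        self.total = self.qty * self.unit_price
        table.append(self)


table = []
OrderRow(3, 2.5).save(table)    # total stored as 7.5
OrderRow(1, 10.0).save(table)   # total stored as 10.0

# Now a filter on `total` works against the stored column.
big_orders = [r for r in table if r.total > 8]
```

The annoyance is exactly what the paragraph above says: every code path that saves a row has to remember to maintain the derived field, or the stored column silently goes stale.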
And thus were spent 4 hours of my Sunday.
I posted questions on StackOverflow and the AWS Forum. No answer. I even messaged people on LinkedIn who are machers at AWS. Nada. Not impressed! Sad!
Posted on Stack Exchange:
Mark Andrew Smith’s PhD thesis from 1994 examines relativistic cellular automata models. A 1999 paper by Ostoma and Trushyk also examines this topic. One topic not discussed is the information required in a cell to represent photons in transit. Suppose we have cells arrayed in a cube so that each cell has 26 neighbors, and suppose there are N cells in the simulation, so it requires log2(N) bits to represent a cell location. If a photon in motion is currently in a cell, its direction can be represented by the location of the farthest cell it will reach on its straight-line trajectory. Any cell can originate a photon and can receive photons passing through from any other cell. So each cell must be able to represent on the order of (N−1)·log2(N) bits of information, to represent all photons in transit from all possible sources.
Question: Is there any schema that could represent the set of all photons passing through a cell using less information, with reasonable fidelity?
Question: Photons are bosons, so the Pauli Exclusion Principle does not restrict them: any number of photons can occupy a single point in space. In the limit (real physical space), does each point in space contain an infinite number of photons? This would require infinite bits to represent, and storage of infinite bits requires infinite energy. If so, does this pose a challenge to the idea, expressed in Fredkin’s Digital Philosophy, that the universe is in fact a cellular automaton, with the limiting speed of light simply coinciding with the “clock speed” of the automaton, i.e. the rate at which photons can move from one cell to the next?
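For concreteness, here is a back-of-envelope version of the count above (the example size is my own, not from the post): with N cells, a cell location takes ceil(log2(N)) bits, and if each of the other N−1 cells may have a photon in transit through a given cell, each described by its farthest destination cell, the per-cell state is on the order of (N−1)·ceil(log2(N)) bits.

```python
import math

def bits_per_cell(n_cells):
    # Bits needed to name one cell location in a simulation of n_cells cells.
    loc_bits = math.ceil(math.log2(n_cells))
    # Worst case: one in-transit photon per possible source cell, each
    # described by the farthest cell on its trajectory.
    return (n_cells - 1) * loc_bits

# Even a tiny 1024-cell world needs ~10 kilobits of photon state per cell.
print(bits_per_cell(1024))  # 1023 * 10 = 10230
```

The point of the exercise is the scaling: per-cell storage grows nearly linearly in the size of the whole simulation, which is what makes the question above interesting.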
Someone attempted to reproduce BDM, had problems and posted on CodeReview StackExchange asking for insight. The dummies there criticized the white space and variable names in his code. I found someone’s blog post with a correct answer and posted it. Sanctimonious and clueless lifers on the site deleted the information. The rules of StackExchange pretty much guarantee that narrow-minded lifers, similar to Wikipedia edit patrollers, will defend StackExchange against any useful content. Oh well. Here’s my answer:
OP is trying to write a Python program to reproduce a claimed calculation result of Bueno de Mesquita (BDM). There is another attempt to reproduce this calculation, in Python, by David Masad, “Replicating a replication of BDM“. Masad provided Python code and showed an approximately 20% divergence in the median score, starting from the same example, the same inputs, and the same references. Jeremy McKibben-Sanders then replicated the model, with results matching BDM. Masad added a follow-up post discussing the coding issues that led him astray. Reading those posts and their code, and comparing them with the code above, should lead to a correct diagnosis.
From kindergarten through 2nd grade, my son played Little League baseball; all players were rotated through all positions during the game, and keeping score was discouraged. Pitching was by machine. Come 3rd grade, things change: The emphasis is now on winning. Players are selected for particular positions that they keep throughout the season. One or two players are selected to pitch, and no others are trained in pitching. The coach is a former minor league player with a focus on winning. If a kid can’t bat (unless it’s the lone girl on the team), he will signal them to walk or bunt. My son hated it, and we just dropped out.
There are two pressures on the coach: One is parents who want to see their kid’s team win at all costs, whose kids are docile enough to accept any position. The other (apparently a great minority) is parents like me, who want to see their 9-year-olds having fun and learning to play all the positions in the game.
I’ve never thought about this before, but apparently it’s a matter of some debate:
I was taking a look at the 1994 PhD thesis of Mark Andrew Smith on Cellular Automata Methods in Mathematical Physics. I could only find one subsequent paper by Smith, on polymer simulation in 1999 with B. Ostrovsky. I assume he is no longer active. The only other work I found was some apparently self-published work by Canadian engineers in 1999, Tom Ostoma and Mike Trushyk. Like Smith they didn’t publish anything after 1999. It doesn’t seem to be an actively pursued field. The only reason I could find for this lack of pursuit was a comment on the Math Stack Exchange website by Willie Wong stating that
One of the reasons that it may be difficult to model Minkowski space based on cellular automata is that there are no “non-trivial” finite subgroups of O(3,1), where non-trivial means that it doesn’t just reduce to a finite subgroup of O(3) via conjugation. So while cellular automata can be manifestly homogeneous and isotropic (so admit a discrete O(3) symmetry), it becomes conceptually difficult to imagine some cellular automata capturing Lorentz symmetry.
Suppose we are trying to model physical space with a cubic array of automata so that each cell has 26 neighbors. Suppose we imagine a center cell with coordinates 0,0,0 and all other cells have coordinates x,y,z offset from 0,0,0.
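The 26-neighbor figure is just the 3×3×3 block of offsets around a cell, minus the cell itself; a quick check:

```python
from itertools import product

# All coordinate offsets in the 3x3x3 block around (0, 0, 0),
# excluding the center cell itself: 3**3 - 1 = 26 neighbors.
neighbors = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]
print(len(neighbors))  # 26
```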
Consider the propagation of a ray of light with intensity I, emanating from a cell x,y,z in a direction indicated by an x-y plane angle thetaXY and a y-z plane angle thetaYZ, with x,y,z as the origin for purposes of describing the angles.
Suppose we are doing a simulation of N cells of space. For a given cell x,y,z, N-1 other cells may originate a light ray whose angle is such that it passes through x,y,z. At any given time step, any one of a cell’s 26 neighbors thus has the burden of transmitting N-1 light ray descriptions (x,y,z,thetaXY,thetaYZ,I) received indirectly from other cells on to this cell. It is not immediately clear that these descriptions can be combined in the sense of say “Fetch & Add” in an Ultracomputer.
What is the amount of information necessary to hold within a cell, and to pass between cells in a single clock cycle, to represent the flow of light in a physical simulation on a cellular automaton in a cubic array with N cells?
The light issue occurred to me because when I use my eyes (admittedly not quite a single-cell receiver), I can move my head and see information that has come to me from very far and very near distances, and this information is instantly available wherever I move my head. That means every neighboring cell needs to carry information about light travelling from any distance, which appears at first glance to be an infinite amount of information. Also, if one were powering a cell in a cellular automaton and each cell needed to hold an infinite amount of information, it would also seem to need an infinite amount of energy to process or represent that information. It all gets very confusing at that point.
I like the idea of using cellular automata to represent physical space with fidelity to the laws of relativity. In particular, I watched the movie Interstellar, where one of the plotlines was the idea that wormhole travel, while quick for the traveler, would still entail the same time dilation effect as if the traveler had not used a wormhole. It would be fun to use cellular automata simulations to model this effect.