Look, the journal system and anonymous pre-publication peer review are a disaster for science. The reasons have been documented over and over in the last few years; if you follow Andrew Gelman's blog you'll have seen hundreds of examples of serious problems. In part, this is a social issue, and I don't have a solution to the social issue (i.e. how we promote science and scientists and fund them). But, in part, this is a technological problem. If there were a really good technological solution to publication, we would be better off. So, here's what I think such a system should look like:

1. A decentralized archive of papers, data sets, and public commentary on papers.
2. Each submission is given a UUID and cryptographically signed by all authors, revision history is allowed, and all revisions are stored.
3. Propagation of new articles proceeds in a peer-to-peer fashion. Peers are cryptographically identified by signatures. Certain peers are marked as "trusted archives" by distributed vote (i.e. everyone at the University of Foo marks the official "University of Foo local archive, run by the Foo information technology department" as a trusted archive). A typical submission would go to your local instance, which immediately propagates to your trusted archives. The system is not tied to universities; archives could be kept by professional organizations, government libraries, state governments, nonprofits, even individuals.
4. Metadata is propagated broadly; complete copies of metadata are certainly exchanged between major archives. Local instances automatically replicate metadata to your local computer based on keywords etc. Search proceeds by first searching your local metadata archive, then requesting a metadata transfer from your trusted archives on the basis of search keywords, authors, etc., as well as sending peer-to-peer queries to other registered peers (your collaborators at other universities, etc.). Queries have a time-to-live and are propagated at least several steps, but no more than 4 or 5 (to avoid exponential explosion).
5. Actual content (papers, datasets, etc.) is propagated based on the policies of the archive operator and/or the local operator. Archives and individual instances would operate essentially symmetrically, but archives would obviously be expected to house much larger data storage and to accept many more submissions for full archival.
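Point 4's TTL-limited query propagation can be sketched in a few lines of Python. This is just an illustration: the peer names, the network shape, and the metadata index below are all invented.

```python
# Hypothetical sketch of TTL-limited, breadth-first query flooding (point 4).
def propagate(network, metadata, start, keyword, ttl=4):
    """Flood `keyword` outward from `start`, stopping after `ttl` hops;
    return the peers whose metadata index matches."""
    seen, frontier, hits = {start}, [start], []
    for _ in range(ttl):
        next_frontier = []
        for peer in frontier:
            for neighbor in network.get(peer, []):
                if neighbor in seen:
                    continue
                seen.add(neighbor)
                next_frontier.append(neighbor)
                if keyword in metadata.get(neighbor, set()):
                    hits.append(neighbor)
        frontier = next_frontier
    return hits

# A toy network: my instance, a trusted archive, and some farther-away peers.
network = {
    "me": ["foo-archive", "collaborator"],
    "foo-archive": ["state-library", "nonprofit-archive"],
    "collaborator": ["bar-archive"],
    "bar-archive": ["far-peer"],
}
metadata = {"state-library": {"bayes"}, "far-peer": {"bayes"}}
print(propagate(network, metadata, "me", "bayes", ttl=2))  # ['state-library']
```

With `ttl=2` the query finds only the nearby match; raising the TTL to 3 also reaches `far-peer`, which is exactly the trade-off between coverage and exponential explosion.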

The biggest problem, as I see it, is control of spam and bots. How do you prevent the system from becoming bogged down by automated submission of enormous quantities of meaningless trash? Cryptographic signatures help a little; using a web of trust would help a lot. University archives could accept local submissions only from their own university's employees and from partner archives (say, UC Davis accepts transfers from all the other UC campuses, as well as 20 or 40 other major universities globally: Michigan, WashU in St. Louis, Cambridge, the University of São Paulo, the University of Tokyo, or whatever).

In the end, scientific communication would work like this: you write your stuff up, archive the data set, and submit it to the publication program on your computer. It assigns a UUID, indexes all the metadata, and submits to each of your trusted archives; the trusted archives propagate the metadata throughout the network, and within a few hours anyone in the world can find your article by keywords, authors, etc. Anyone can download it via peer-to-peer requests that eventually find an archive with a copy. Commentary transfers with the paper, and commentary on a paper can easily be submitted to the system, where it too is archived, along the lines of an email mailing-list archive with threaded conversations.
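The local submission step might look something like the following sketch. The field names are invented, and the SHA-256 digest is only a stand-in for real cryptographic signatures by the authors (point 2):

```python
import hashlib
import json
import uuid

# Hypothetical sketch of local submission: assign a UUID, index metadata,
# and produce a revision record. The "signature" below is a bare SHA-256
# digest standing in for actual author signatures.
def submit(title, authors, keywords, body, revision=1):
    record = {
        "id": str(uuid.uuid4()),
        "title": title,
        "authors": authors,
        "keywords": sorted(keywords),
        "revision": revision,
    }
    payload = json.dumps(record, sort_keys=True) + body
    record["signature"] = hashlib.sha256(payload.encode()).hexdigest()
    return record

paper = submit("On Peer-to-Peer Publication", ["A. Author"],
               {"publishing", "p2p"}, "full text of the paper...")
# `paper` now carries everything a trusted archive needs in order to
# propagate the metadata and verify the content hasn't been tampered with.
```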

It seems pretty obvious this would be way better from a "moving science forward" perspective than anything we have now.

Going back to the social aspects though, it certainly seems that this wouldn't have anything like the prestige production of "A publication in Nature" or whatnot.

Lots of people who are skeptical of the concept of UBI have concerns about how to fund it. So, I did a little calculation with the American Community Survey Microdata. Here's what I got for 2014 and 2015:

| Year | Earned income / household | Investment income / household | Persons / household | Adults | Children | Seniors |
|------|---------------------------|-------------------------------|---------------------|--------|----------|---------|
| 2014 | $55,963 | $3,589 | 2.40 | 1.81 | 0.27 | 0.328 |
| 2015 | $58,138 | $3,880 | 2.39 | 1.81 | 0.24 | 0.34 |

Some more important numbers:

- GDP per capita is around $57,400/yr/person these days
- Population is about 325,000,000 people
- The US budget is $3.9 trillion, or $12,000 per capita, or 21% of GDP
- US tax revenue from direct taxes is $2.6 trillion, or $8,000 per capita
- Tax revenue from other sources is $606 billion, or $1,865 per capita
- The budget deficit is $1.3 trillion, or $4,000 per capita

So, how could we create a UBI that was more or less equivalent to the situation we have today in terms of the accounting? (Not, in reality, in terms of the economics and incentives, and the resulting changes to work hours and employment etc., obviously; but if we held those things constant and just changed how the programs worked, what would the accounting look like?)

We have the following equation to keep the per capita deficit $D$ constant:

$$D N = U_a N_a (1 - p_{d1} - p_{d2}) + N_a (U_{d1} p_{d1} + U_{d2} p_{d2}) + U_s N_s + U_c N_c + 0.08\,\mathrm{GDPC}\,N - t (I_e + I_i) - T_o N$$

In this equation $D$ is the deficit per capita, $N$ is the average household size, $N_a$ is the number of adults, $p_{d1}$ is the fraction of adults with "level 1" disability and $p_{d2}$ the fraction with "level 2" disability (these are just stand-ins for the fact that some people need much more support), $N_s$ is the number of seniors, $N_c$ is the number of children, $\mathrm{GDPC}$ is the per capita GDP, with 0.08 being the current fraction of GDP spent on discretionary spending, $t$ is the flat tax rate on income, $I_e$ is the earned income, $I_i$ is the investment income, and $T_o$ is the other tax revenue per capita. $N$ is the household number of people: $N = N_a + N_s + N_c$.

Let's plug in some reasonable values:

- $U_a$, for a standard adult: let's put $500/mo as a "tax refund" or basic guaranteed income.
- $U_s$ is $16,000/yr, which is more or less what we're already paying in Social Security.
- $U_c$ is $250/mo, reflecting the cost of feeding a child and buying some very basic things.
- $U_{d1}$, for people with partial disability, is $1,500/mo, reflecting the fact that they can work somewhat, with $p_{d1} \sim 0.05$.
- $U_{d2}$ is for people with serious issues, such as rapid-cycling bipolar disorder or multiple sclerosis or whatever, with $p_{d2} \sim 0.05$. We put $2,500/mo, reflecting a basic stable living situation, which is ultimately cheaper than having very sick people in and out of the ER.
- $I_e$, the earned income, we put at $57,000 per household.
- $I_i$ we put at $3,600 per household.

Plugging in these numbers and solving for $t$, the required tax rate, I get $t = 28\%$.

My Maxima computer algebra code:

Defeqn: Deficit*Nhh = Ua*Na*(1-pd1-pd2) + Na*(Ud1*pd1 + Ud2*pd2) + Us*Ns + Uc*Nc + Discrpct*GDPC*Nhh - t*(Ie+Ii) - To*Nhh;

Numeqn: subst([Nhh=Na+Ns+Nc, Na=1.8, Ns=0.33, Nc=0.25, pd1=0.05, pd2=0.05, Us=22000, Ud1=1500*12, Ud2=2500*12, Ua=500*12, Uc=250*12, Discrpct=0.08, GDPC=57400, Ie=57000, Ii=3600, Deficit=4000, To=1865], Defeqn);

float(solve(Numeqn, t));


You can run this as maxima code here:

So, with a 28% flat tax we can give every adult in the US $500/mo, feed every child, take care of every senior citizen, take care of a disabled population totaling 10% of the full population, avoid all poverty traps, buy all the military and research and census and whatnot, eliminate all but the 1040-EZ form, eliminate high marginal tax rates on second earners in dual-income households, and have the same deficit we have currently. I think that's an absolute STEAL. NOTE: you might argue that we need an additional $6,000/yr for seniors to cover some kind of Medicare insurance; even when you do that, you get a tax rate of 31%, still way better than what we've got.
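For readers without Maxima handy, the same accounting identity can be checked in plain Python. This mirrors the substitution in the Maxima call above; note that the Maxima code uses Us = 22000, i.e. the $16,000 Social Security payment plus the NOTE's $6,000 Medicare add-on, which is where the 31% comes from.

```python
# Solve the per-household UBI accounting identity for the flat tax rate t,
# using the same values as the Maxima code above.
def flat_tax_rate(Us, Na=1.8, Ns=0.33, Nc=0.25, pd1=0.05, pd2=0.05,
                  Ua=500 * 12, Uc=250 * 12, Ud1=1500 * 12, Ud2=2500 * 12,
                  Discrpct=0.08, GDPC=57400, Ie=57000, Ii=3600,
                  Deficit=4000, To=1865):
    Nhh = Na + Ns + Nc  # persons per household
    spending = (Ua * Na * (1 - pd1 - pd2)          # basic income for adults
                + Na * (Ud1 * pd1 + Ud2 * pd2)     # disability support
                + Us * Ns + Uc * Nc                # seniors and children
                + Discrpct * GDPC * Nhh)           # discretionary spending
    return (spending - To * Nhh - Deficit * Nhh) / (Ie + Ii)

print(round(flat_tax_rate(Us=16000), 3))  # 0.281 -- the 28% in the text
print(round(flat_tax_rate(Us=22000), 3))  # 0.314 -- the 31% with Medicare
```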

Shortly after Donald Trump took office, huge numbers of people ran out to stock up on canned goods and wrote pieces about how Steve Bannon was testing the waters for a coup via the Muslim-ban order.

Why did that occur? Suppose for example that you have a machine in a room that does a good job of running some missile silo. It's designed well, and it has run well for the last 30 years. Your confidence that it won't accidentally launch nuclear missiles is pretty high.

Now suppose the janitor comes to you and tells you that last night, while he was cleaning up, he spilled a bunch of something into the console and, while cleaning that up, flipped a bunch of the switches. Suppose there are 20 switches which, if put into one particular (randomly chosen) configuration, will start a nuclear launch countdown.

Now, the chance of accidentally landing in that configuration is $1/2^{20} \sim 1/1000000$, but checking that it's not set to launch is the first damn thing you're going to do when you rush into that room.

The point is this:

Historical info about how reliable something is is useless to you when you have new information that something is amiss with the machine. You MUST reset your Bayesian prior over the reliability of the machine to include some probability on every possibility of what state it might be in, and then collect data on the machine and use your model of how it works to re-concentrate your posterior distribution on the internal state.

To decide what to focus on, you need to use Bayesian Decision theory to first check the things that have the largest expected costs associated. So even if it's a 1 in a million chance that you might be nuking the planet into oblivion... you gotta check it because the cost associated if you nuke the world is trillions and trillions of dollars (or utiles or whatever unit of cost you decide to use).
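The decision rule here is just "check things in order of probability times cost." A toy version, with made-up probabilities and costs:

```python
# Toy Bayesian-decision ordering: inspect the highest expected-cost item
# first. The items, probabilities, and costs are invented for illustration.
checks = {
    "launch switches": (1 / 2**20, 1e13),  # tiny probability, enormous cost
    "status printer":  (0.5, 100.0),       # probably broken, cheap to ignore
    "coffee machine":  (0.9, 20.0),
}
order = sorted(checks, key=lambda k: checks[k][0] * checks[k][1], reverse=True)
print(order)  # ['launch switches', 'status printer', 'coffee machine']
```

Even at one-in-a-million odds, the launch switches have an expected cost millions of times larger than anything else in the room, so they get checked first.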

So, it's fully rational when a large shift in how things are going politically leads us to focus on the possibility that everything might be going to hell right away and we need to stock up on canned goods, ammunition, and cricket bats for the zombie horde.

Fortunately, after seeing the output of the machine for a while, we can re-concentrate our posterior. So, conspiracy nuts are not the ones who cry conspiracy when a big change occurs in the government; it's actually rational to fear conspiracy if there's a big enough change. Conspiracy nuts are the ones who keep crying conspiracy well after the likelihood suggests otherwise and the expected cost associated with conspiracy should be calculated as negligible.

The "conspiracy nut" has two problems. Typically they start with concentrated priors on conspiracy, and second, they focus their model of how the world works on likelihoods associated with conspiracies (that is, their prior over models is to choose a model where any data can be seen as evidence of a conspiracy).

I googled the cost of living in Mumbai, India, just because I wanted to make a point. Here's a simple index:

#### Mumbai:

1 Bedroom City Center apartment: 36,000 Rs/mo = $530/mo

Utilities for that apartment: 2,960 Rs/mo = $43/mo

90 meals at inexpensive restaurant: 21,600 Rs/mo = $316/mo

Total cost of basic subsistence: ~$900/mo

#### Los Angeles:

1 Bedroom apartment: $1,500/mo

Utilities for that apartment: $250/mo

90 meals at inexpensive restaurant: $900/mo

Total cost of basic subsistence: ~$2,650/mo

Basic requirements for life (food and shelter) cost 3x as much in Los Angeles as in Mumbai.
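The arithmetic behind that 3x, using the figures above:

```python
# Quick check of the subsistence arithmetic (monthly $ figures from the post).
mumbai = 530 + 43 + 316   # rent + utilities + 90 inexpensive restaurant meals
la = 1500 + 250 + 900
print(mumbai, la, round(la / mumbai, 2))  # 889 2650 2.98
```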

EDIT: Also, we should take income tax into account. In the US, an individual making $2,650/mo after taxes ($31,800/yr) needs a salary of close to $37,650, which is the upper border of the 15% tax bracket and puts their marginal rate on additional earnings at 25% (the next bracket). And that's ignoring state taxes. Let's just call it something like $40k.

In India, I don't know much about the tax system, but from Wikipedia we can approximate things as needing to make 727k Rs/yr, and if I'm reading the (clearly broken) Wikipedia table right, that'd be taxed at around 13%. So you'd need to make something like 865k Rs = $13,000/yr. 40/13 = 3.08, so the main point still holds: it's 3x more expensive to subsist in LA.

Traditionally, it's been the case that a lot of human labor has been required to produce stuff. Want a coat? Someone has to design the coat, cut the cloth, dye the cloth, wind the bobbins, sew all the individual pieces together, pack it in a shipping box, ship it to the store, stock it on the shelf, operate the check-out register... Even if you assembly-line it all for efficiency, it probably takes between 20 minutes and a couple of hours to produce one coat, depending on the design.

And whether it's coats or sausage, tires or mattresses, bread or cut roses, if you wanted to buy something, you had to pay for all the people who gave up their time to make that thing a reality. In that world, a sudden government transfer to you was sure as sugar the same thing as forcing someone else to give up hours of their life to give you something they made in those hours, without compensation. Hence the totally un-nuanced slogan "Taxation is Theft". And if there is one thing we can't replace, it's hours in your life. We can grow more trees, mine more gold, collect more energy from the sun, but we can't give you a time machine to go back and re-do an 8-hour work shift as a snorkeling vacation in the Bahamas with your kids instead.

In a world where multi-purpose ultra-robots produce all the everyday "stuff" at an energy cost of a few square meters of sunlight for an hour each day, a government transfer does NOT in any way imply forcing someone else to give up pieces of their lives for your benefit. In this case, at least in terms of hours of your life given up, "Taxation is Theft" is flat-out wrong!
(And I'll get to the issue of natural resources and raw materials in a different post.)

In an intermediate world, where some people work and others don't, the degree to which a government transfer implies forcing someone else to work without pay is more difficult to quantify. Sure, if the government prints money and hands it to you, and you buy a thing, and that thing has some "embodied time", then some people worked to make it. Were they forced to give that time up for free? Well, assuming the tax consequences of working are known in advance, and the existence of the government transfer is known to everyone, evidently they still valued the take-home wages they got from the work more than the time they gave up, or they wouldn't be doing that job. The taxes are baked in to the decision.

What becomes problematic, then, is when the structure or quantity of government transfers and taxation change, and these changes invalidate the plans that people made, so they receive less than they expected from all the investment they put in. For example, if you went and got a med school degree, and just as you were taking your first job they socialized the entire health-care system, set a low wage for you, and said "you'll have to put up with that or stop practicing medicine", then you either need to give up all that training and go do something you're less qualified for, or put up with the consequences of having planned to be a highly paid doctor and winding up a socialized government employee.

The big problem with something like "getting rid of Social Security" is that over the last 20 or 40 or 60 years of their lives, people made decisions under the assumption that they would get Social Security, and so they accepted lower wages than they otherwise would have, on the theory that at least when they're X years old they'll get some relatively well-understood payments during their retirement.
This all suggests that really, we shouldn't see a fixed UBI as "taking things from people" (Taxation is Theft!) but instead as "enabling our economy to function even when people's labor is not really required". And even if people's labor is still required, provided that we are very consistent and predictable about the rules for the UBI, prices in the market for labor will adjust to meet those expectations, and people will be getting what they bargained for when they take a job.

Our new US National Anthem:

Stuck on the basic idea that everyone works and buys their stuff with the money they make in wages, naive everyday socialists often advocate for what they see as a solution: the Living Wage (AKA "a decent minimum wage"). This number changes through time, of course, but recently it's been assumed to be about $15/hr here in California.

What is the problem with minimum wages? They don't alter the mathematical properties of the system in the right way to achieve a stable equilibrium.

What is needed to avoid the race to dystopia is for people's consumption in a given year to be $C = B + Wt$, where $B > 0$ is the basic income, $W$ is some wage per hour, and $t$ is the amount of labor supplied in that year, in hours worked.

With that system in place, as the price $P$ of a basket of goods falls due to efficiency and automation, the amount of goods each person consumes is $\frac{B}{P} + \frac{W}{P}t$. And even if the amount of labor you supply is $t=0$, because it's cheaper to just make stuff without having people involved, as $P$ goes to 0 you still consume a larger and larger amount of stuff, $B/P$.

The problem with the "Living Wage" is that it changes the equation to $C = W_L t$, so the amount of stuff you consume is $W_L t / P$, and if $t=0$ constantly as $P$ decreases, the whole thing is zero and stays zero. No amount of increasing $W_L$ can help, because as $W_L$ increases, the attractiveness of replacing people with automation increases, and the amount of work $t$ demanded by employers will decrease even faster. It's pretty obvious that if you wanted to put all of the US out of work, you'd simply pass a law requiring a minimum wage of $350 million/yr, and until the rioting fixed it, no one would work at all.

In fact, we don't currently have a situation where we can accomplish everything without having anyone work, but we do have decreasing need for many types of labor. If we could set a $B$ big enough today, you would be able to survive, perhaps uncomfortably or poorly at first, but survive, on $t=0$, and so there is no immediate human demand to increase $W$ above what would otherwise be a market price for each type of work (i.e. people don't riot in the streets demanding that the government force employers to hire them at $15/hr to do stuff that only makes the employer $5/hr, or whatever). People can then work at whatever wage prevails in the market without fear of starving to death over the next few weeks. They can only go up from $B$, and they can go up to whatever extent they can find in the market. The wage in the market reflects the value that the employer perceives for the work, and the higher-wage jobs attract people to learn the skills that actually have value to society. Wages continue to serve their critical information-aggregation role, a role they don't play when employers are forced to hire people at wages that don't reflect the value of the work.
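The difference between the two equations is easy to see numerically. A toy run with made-up numbers ($1,000/mo basic income, $15/hr wage) as the price of a basket of goods falls:

```python
# Consumption in baskets of goods, with and without a basic income B,
# as the basket price P falls and hours worked t go to zero.
# B and W are made-up illustrative values, not a policy proposal.
B, W = 1000.0, 15.0           # $/mo basic income, $/hr wage
for P in (100.0, 10.0, 1.0):  # price of a basket of goods
    for t in (160.0, 0.0):    # hours worked per month
        with_ubi = (B + W * t) / P   # C/P under C = B + Wt
        wage_only = (W * t) / P      # C/P under C = W_L t (living-wage world)
        print(f"P={P:>5} t={t:>5} UBI={with_ubi:8.1f} wage-only={wage_only:8.1f}")
# With t = 0, the wage-only consumer gets zero baskets at every price,
# while the basic-income consumer's consumption B/P grows as P falls.
```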
As the cost of providing goods falls due to automation, offshoring, whatever, eventually people enjoy enough cheap goods on their equal basic income $B$ that giving up time to earn meager wages seems unattractive. Why bother to earn wages when you get $1,000/mo basic income and, thanks to automated robots building things and running farms, that's enough for everyone in the country to live in a mansion, have a non-polluting flying car to travel around on vacation with, and eat caviar?

In the long term limit where producing things falls to extremely low cost, everyone lives the high life off a constant basic income except those people who have some kind of special skill that can't be automated, and they sell some small amount of labor and get even better lives than the very comfortable lives lived at income $B$.

Furthermore, to the extent that people have a desire for more, they will learn skills that can't be automated, or spend their time discovering ways to automate new things that they want produced. They focus their attention on the most productive place, the one where natural market wages are high. The main way that people would "earn wages" is by discovering ways to tell their robots to produce new types of stuff that no one had thought of before.

We see this already in 3D-printing communities and the Free Software movement. If everyone who used a copy of the Linux kernel had to pay Linus $180 or something, we'd all be poorer off, because the marginal cost of producing a copy of the Linux kernel is as close to zero as dammit. But even though many people get absolutely NO money from contributing to the Linux kernel, lots of people provide patches and add new features, because they themselves get to "eat" the value they created.

The living wage is the wrong solution to a real, serious problem; the right solution is changing the linear function $C = Wt$ into the affine function $C = Wt + B$. In the long run, we just need any old $B$, as prices for goods will fall as the cost of production falls. In the short term, we do need to think carefully about how to set $B$.

But the big jump is the jump in understanding we need from the population. It needs to be OK for people to get some constant amount of income each month, so that we can continue to find ways to automate things, cut the costs of producing goods, and cut the amount of human labor we need to consume. Eventually, no one should have to breathe welding fumes, operate a bandsaw, or sit in front of a computer all day processing airline reservations. If we can figure out how to automate the process of creative research in biochemistry too... then all the better.
Imagine, if you will, a robotic machine standing about 2m high, with an adjustable-height camera mast and 4 appendages: two that operate like human hands with pressure sensitivity and delicate grip, one that operates like a power drill/screwdriver, and one that operates as a multi-function cutting, welding, gluing, nailing tool. The machine has cameras on each appendage and is capable of altering its shape and height to reach anything a human standing on a stepladder or lying on a dolly could reach.

Furthermore, the machine is capable of learning many common tasks by watching people, and once one of the machines has learned a task, the "program" for that task can be copied and transferred between machines immediately. In fact, the machines have high-speed wireless communication allowing them not only to transfer information between themselves, but to organize themselves into groups to carry out tasks that require cooperation. They can operate associated machinery to replicate themselves: each individual robot could assemble one new robot per day, and parts can be manufactured by teams of robots in manufacturing facilities at thousands of robots per day from relatively raw materials (steel and copper ingots, tubs full of plastic beads, recycled broken glass, each raw material produced by teams of robots working in specialized facilities such as steel smelters and petroleum refineries, etc.).

Suppose furthermore that there are a variety of means put in place to avoid having these robots become dangerous dystopian Terminators. We'll suppose that some people have worked very hard to create secure protocols for controlling them, because I'm not interested in Terminator dystopia; I'm interested in a point of view about Economics.

The machine operates on batteries that take about 15 minutes to fully charge, and it runs for, say, 4 hours between charges when doing everyday medium-duty tasks.
The quantity of electricity in a full charge is on the order of 20 kWh (about 17k food calories), which costs around $4 today.

Suppose the machine costs, today, say $100k to manufacture. It services itself (or can service another robot) for any regular servicing, and operates for 20k hours (2.3 years) between regular services lasting less than 1 day. Otherwise it can operate 24 hours a day with just brief recharges throughout the day.

Now, in this Utopia/Dystopia, human labor is essentially meaningless. Everything from industrial welding to car washes to laundry to woodworking to clothes manufacturing to gourmet cooking to firefighting to truck driving to painting portraits can be done by robots, using far fewer raw resources, far faster, and more efficiently. The marginal cost of getting something done that would take one full day of specialized human labor would be something like $5 worth of electricity.
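Under the stated assumptions, the implied operating costs work out roughly like this (treating the $100k build cost as amortized over the 20k-hour service life):

```python
# Back-of-envelope robot operating cost, using the figures in the text.
build_cost = 100_000          # $ to manufacture
service_life_hr = 20_000      # hours between major services (~2.3 years)
charge_cost = 4.0             # $ per full 20 kWh charge
hours_per_charge = 4          # medium-duty run time per charge

capital_per_hr = build_cost / service_life_hr        # amortized build cost
electricity_per_hr = charge_cost / hours_per_charge  # energy cost
print(capital_per_hr, electricity_per_hr)
# At ~$1/hr of electricity, the "$5 worth of electricity" for a day of
# specialized human labor corresponds to roughly 5 robot-hours of work.
```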

In a country like the US where the basic underlying assumption about the economic organization of society is that "things" are limited, and people should work to provide services to others so that they can earn money so they can buy things... If we take that point of view dogmatically to the grave with us... the Equilibrium situation would seem to be near extinction of the humans.

Why? Well, there will always be some small amount of stuff that can only be done by humans, whether that's inventing new gourmet recipes (requires taste receptors) or validating the security protocols that keep the robots from killing us all (requires humans because you can't trust the robots). But for literally everything else, you could get by producing things without humans. Yet "stuff" is still limited, like raw materials. So the equilibrium price of "stuff" is going to go to some $\epsilon > 0$, whereas the value of everyday human labor is going to go to exactly zero (you can always get a robot to do the thing cheaper).

So, sure, stuff costs next to nothing, but after a short period of time, humans have absolutely zero income and then zero financial wealth, unless they own significant quantities of natural resources and are able to charge rents for their use.

The equilibrium situation is to use the robots to compost the corpses of the everyday people who starved to death to grow crops to feed the people who happened to own land at the start of everything. And then, the economy wheels along smoothly with people who own land charging rent to the other people who own land, the only goods being exchanged are essentially very raw materials through the medium of money.

Which is to say, as the labor cost of producing stuff goes to zero, the system collapses under the assumption that people should provide valuable services in order to get the things they need to survive.

Now, suppose instead we allow each person in the world to essentially "print" a fixed amount of money in each time period. Let's measure money in units of this fixed amount, so we can each print, say, exactly $1/day. In the long-time equilibrium, each person can get some amount of stuff. Furthermore, the equilibrium would seem to be that the price of the stuff we produce per person would be $1. That is, ignoring the tiny number of people who actually do specialized stuff like auditing the robots to make sure they don't wipe out the human race, everyone gets a 1/N share of all the stuff, and as we progress, the total stuff being produced for people increases without bound while the price stays constant at $1 for 1/N of all the stuff produced in a day.

Whether you let people "print" money, or, because it's easier to ensure that no one is cheating, you have a single entity printing and distributing the money, or you do some mixture of printing and taxing income: if you ensure that everyone has at least some fixed basic quantity of money coming to them every unit of time, then the equilibrium of a world of plenty shifts. Instead of a dystopian nightmare collapse of the population down to a tiny number (a few thousand?) of rich people who each own 1/N of the earth's surface and exchange dollars for raw materials, you get a utopian society where everyone simply gets some of the universal plenty.

Which is to say, the Universal Basic Income (UBI) is the mathematical mechanism to eliminate a singularity in the economy at zero income. How quickly that singularity consumes our world will rest on how quickly we put people out of work. It won't take a near-magical robot. Every few years France already has rioting in the streets due to a very low labor force participation rate and high unemployment.

See some Labor Force Participation rate data here

When working with social data it's pretty frequent that you get binned information. Like for example some survey might tell you the quartiles or deciles of the age distribution for males in the US, or the percentage of people whose age is between certain fixed values. If you're lucky you might also find out something about the people within each quartile (such as the mean age).

Suppose you'd like to do some prediction of some quantity based on this kind of information. For example, you might have percentage of people with a given educational attainment born before each year, as a graph, and you might have population quartiles of the current population, and you have some predictive equation for say income based on educational attainment and age, and you'd like to calculate the average income for males between age 20 and 45 today and for males between 20 and 45 years of age 20 years ago.

This is a made up example, but typical of the kind of thing I'm thinking of. In particular, you might like to do something like take panel data and infer trajectories through time for individuals, even though you don't have repeated measures. So for example you might generate virtual people born in 1940 and then have them go through earnings trajectories which put together replicate the panel data in 1960, 1970, 1980, 1990, 2000 and estimate something like what the distribution of household wealth would have been if some kind of policy were different (and you have say some simple causal model for what the savings would have been if the policy were different).

The answer of course is that you use maximum entropy. But the maximum entropy distribution of interest is a complicated one, and you might like to do numerical maximization of the entropy.

If you want to do something like this in Stan, where you're simultaneously doing inference on parameters using Bayesian methods and finding the parameters for a distribution that maximize some measure of entropy for some prior... how do you go about it? I don't think there is an easy answer.

It might be good to come up with a simpler, more tractable example problem. So, for example, suppose you know that the quartiles of age in some population are 23, 44, and 70 years, that the average age between 0 and 23 is 9, between 23 and 44 is 31, between 44 and 70 is 62, and over 70 is 81.

Suppose also that we have some function Q(x), and we want, in Stan, to approximately identify the Gaussian mixture model with 3 components (8 degrees of freedom: the mean and SD of each mixture component, plus the weights of the mixture) that maximizes entropy subject to those 7 constraints, and then calculate, in the generated quantities, the mean value of Q(x) from a sample of 1000 points drawn from the maxent distribution. I'll even add some leeway, as if the numbers above were rounded off, so the maxent distribution only needs to satisfy the constraints to within ±0.5% and ±0.5 years.
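As a starting point, here's a sketch (in Python with NumPy rather than Stan) of the Monte Carlo bookkeeping: draw from a candidate 3-component mixture and measure how far it sits from the 7 constraints. The weights, means, and SDs below are arbitrary initial guesses, NOT a solved maxent fit; an optimizer, or a Stan model with a penalized target, would adjust them to maximize an entropy estimate subject to these checks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate 3-component Gaussian mixture: arbitrary guesses, not a maxent fit.
w = np.array([0.45, 0.35, 0.20])   # mixture weights (sum to 1)
mu = np.array([10.0, 33.0, 72.0])  # component means
sd = np.array([8.0, 12.0, 11.0])   # component SDs

# Draw a large sample from the mixture.
n = 200_000
comp = rng.choice(3, size=n, p=w)
ages = rng.normal(mu[comp], sd[comp])

# Measure the 7 constrained quantities from the text.
q = np.percentile(ages, [25, 50, 75])  # targets: 23, 44, 70
edges = [-np.inf, 23, 44, 70, np.inf]
cond_means = [ages[(ages >= lo) & (ages < hi)].mean()
              for lo, hi in zip(edges[:-1], edges[1:])]  # targets: 9, 31, 62, 81
print(np.round(q, 1), np.round(cond_means, 1))
```

The gap between these sample statistics and the targets (within the ±0.5 tolerances) is what the constraint-penalty term would act on, while the entropy of the mixture is estimated from the same sample.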