Are Economists Wrong When They Solve the “Firm’s Problem”?
Thanks to those who chimed in on my test question in a previous post (which you will need to review if you want to understand the present post).
So, people agreed with me that in the question I reproduced, the “correct” answer was to have each type of firm maximize profit, meaning the actual dollar amount.
Then, to answer the second part of the question, you figure out how many firms go into wheat vs. corn production by seeing what number n makes wheat producers earn (roughly) $2 profit, just like the corn producers do. (It’s not exactly $2 profit for the wheat producers, but if one more corn producer switched into wheat, the wheat producers’ profit would fall below $2.)
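(To state the entry condition in symbols, with notation I’m making up since I haven’t reproduced the full problem: let \pi_w(n) be the profit of each wheat producer when n firms grow wheat. The solution picks the n satisfying

\pi_w(n) \geq \$2 > \pi_w(n+1)

i.e., entry into wheat continues until one more entrant would push wheat profits below the $2 earned in corn.)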
Now, here’s what’s weird: At the “equilibrium” n, you’ve got (as we said) corn and wheat producers earning (about) $2 profit each. But, they don’t hire the same number of laborers in order to produce that outcome. So that means one type of producer earns a much different *rate* of return on invested capital than the other. So how is this an equilibrium outcome?
(NOTE: I think I know the answer, but I’m just explaining the apparent problem in the way we economists typically solve one of these problems. We have the firm maximize the absolute dollars of profit, when we know in real life investors would shift into an industry where the *rate* of profit is highest. Nobody cares about the dollar-amount of profit, irrespective of how much you need to spend upfront on factor inputs. So, to repeat, I think I know how to resolve this apparent tension, but I will wait to see what others think before I chime in.)
I have not seen the problem statement, but I agree that the solution you’ve posted (which is also my solution, and the solution I feel certain they were looking for) assumes that labor is the only factor of production, i.e. the cost of labor is the only relevant cost here.
I do agree that the mention of the interest rate seems contradictory in spirit to the assumption that there are no capital costs. So if that’s your point, I think you win.
Steve, I didn’t reproduce the whole problem, in the interest of brevity. The interest rate comes in at part (c), where they ask you to give the market price of a corn firm versus a wheat firm. (So presumably you figure out the net income per period and compute the present discounted value.)
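(To sketch what I presume they want, assuming the firm’s net income is a level perpetuity: if a firm earns net income \pi every period and the interest rate is r, its market price is

V = \sum_{t=1}^{\infty} \frac{\pi}{(1+r)^t} = \frac{\pi}{r}

but again, I don’t have part (c) in front of me, so take that setup as my guess.)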
But look at what you’re saying. You know you have to pay wages upfront to hire laborers. In particular, a corn firm has to spend $1 in wages in order to generate $2 in net profit. Yet the wheat producer spends a different amount in wages in order to generate (close to) $2 in net profit.
(I don’t have the solution in front of me so I forget what the optimum level of laborers for wheat firms is.)
So that can’t be right, can it? How can you say to your students, “In equilibrium, corn farmers will pay $1 in wages in order to yield net income of $2, while wheat farmers will pay a different amount in wages in order to yield net income of $2”?
When we say investors are looking at different industries to put their capital in, we don’t have them ask, “Where is the highest dollar amount of return?” Instead we have them ask, “Where is the highest *rate* of return?”
Note, I’m not saying the underlying math is wrong. I’m saying these models contain some assumptions that I have never thought about before.
Bob: The standard answer to this is that we assume the workers are paid simultaneously with the production of output. I understand why you might be uncomfortable with that assumption, but it seems to me that there are a wide variety of industries where it’s pretty reasonable.
OK thanks.
I’m not sure I follow Steve’s point.
Even if we accept his assumptions, we may still have (using different numbers from the original question):
Corn producers: pay $10 in labor, sell the corn for $12, for $2 profit (a 20% rate of profit).
Wheat producers: pay $20 in labor, sell the wheat for $22, for $2 profit (a 10% rate of profit).
Wouldn’t firms want to switch from wheat to corn, where the rate of profit is higher?
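(To be explicit about my arithmetic, I’m computing

\text{rate of profit} = \frac{\text{revenue} - \text{wage bill}}{\text{wage bill}}

so corn gives 2/10 = 20% and wheat gives 2/20 = 10%, treating the wage bill as the sum invested.)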
(BTW: It’s quite possible I have missed the whole point here!)
It strikes me that if you ask the question “Calculate the profit-maximizing levels of output for a corn- and a wheat-producing firm, respectively,” then you get the results given, and it’s a good math exercise.
But if you want a question that better reflects how business investment actually works the question would be “Calculate the rate-of-profit-equalizing levels of output for a corn- and a wheat-producing firm, respectively”.
And I see now that if you assume (as Steve suggests) that you pay the wages of the workers out of the sales revenue then you probably are only interested in absolute profits, not rate of profit.
If the pay is simultaneous with output, the rate of profit is NOT higher in corn. There is no investment on which to earn a rate of profit. We just collect the money for our crop, pay the workers, and pocket $2 in pure profit. Since we are just paying them out of our proceeds, we don’t care if we have to collect a billion dollars to pocket our two. (Of course, this model is unrealistic, because we generally do have to pay the workers in advance, put up other capital, etc. But on its own assumptions, it “works.”)
Throwing a stupid red herring into an exam question is entirely reasonable, and it demonstrates knowledge if the test subject can reply with a suitable justification for rejecting the red herring.
Murphy,
For the standard type of this problem, aren’t the assumptions made that each firm is indistinguishable from every other, and that it is not gross profit that is equalized but the rate of profit, or MARGINAL revenue? I don’t remember seeing profit-maximization problems that tell us to equalize gross profits across heterogeneous firms.
MF, they are homogeneous firms. They can instantly switch to corn or wheat production. And you always assume a firm maximizes the absolute amount of profit. It’s the latter approach that now strikes me as very weird, even though I had never thought of it like that before. You can make it correct in the way Steve Landsburg suggests, but especially in an agricultural model, that seems so far removed from reality that I don’t think it’s helpful. It’s teaching the wrong concept, I think, rather than making a harmless mathematical assumption for analytic tractability.
Thanks, OK that helps. There are two choice criteria here, labor and what to produce.
Seems to me that with all prices fixed, and no capital market to speak of… this WOULD be the equilibrium outcome. Normally prices provide communication across heterogeneous industries and capital would move around to seek out the best return. However in the problem as stated, no such communication is possible… each firm is isolated from the others.
That pretty much fits the Austrian theory as I understand it.
Indeed, I’d go one step further and predict that if we could find a real economy with price fixing in place, and something that restricts mobility of both capital and labour, we should expect to see similar outcomes where “equilibrium” allows for a significant difference of return between industries.
There is no capital, so there is no rate of return on capital. If there were a capital market, or for that matter a labor market (why is there a wage differential?), you wouldn’t have that. You sort of seem to want to address it by going back to something like the wages-fund theory, but I think the simpler answer is that they just want students to do a little math on a production function, and this all changes if there’s a capital market and a labor market.
Daniel, you don’t think agricultural workers get paid before the product is harvested?
Bob, take 2 restaurants across the street from each other. Restaurant A charges $10 per meal and everyone tips their waiter 10%; Restaurant B charges $11 per meal, pays the waiters $1 per meal they serve, and forbids tipping. They each sell 100 meals and make $1 profit per meal. If we followed your logic, then Restaurant A has revenue of $1,000 and profits of $100, for a 10% “ROI.” B has revenue of $1,100 and profits of $100, for a 9.1% “ROI,” even though exactly the same amounts have changed hands in both cases.
The mistake is to use revenue as a proxy for “investment”; they aren’t the same thing.
Baconbacon, I didn’t make a mistake. The way to salvage the question is to make an unrealistic assumption, namely, that workers are going to plant crops and then wait around until harvest time before they get paid.
In the restaurant example I provided, there is no such issue with the timing of the payment. In what way is it meaningful to compare the ROI of A vs. B? Would an entrepreneur ever take that percentage into account when trying to figure out whether he should go for one model or the other?
BaconBacon, show me what I did wrong in my example. If farm A pays a worker $1 and then some time later gets $2 in revenue, while farm B pays workers $1.50 and then the same amount of time later gets $2 in revenue, you’re saying I’m mixing up “revenue” and “investment” and really people should be indifferent between investing in the two farms?
Start with the example where workers are paid at the time of the sale. Direct pay from the customer results in the same distribution as indirect pay, but a different ROI. A rational entrepreneur would have no preference and would not be swayed to try one model over the other due to a different ROI.
The time factor is misleading, since all capital comes out of savings. Workers are always paid after production; wages paid prior to production can only come out of savings, which essentially makes the workers borrowers, and so their pay will reflect the lost time value of the capital (at equilibrium).
Reverse question:
If two workers showed up offering to sow your corn for you, would you be indifferent between the one who demanded wages now and the one who would accept wages after the harvest? The one who demanded wages now would have to take a discounted rate to get the job. You can reread the “cost of labor” in the equation as “the cost of labor at harvest time; any labor paid before then is paid at a rate discounted by the prevailing interest rate.”
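(As a sketch of that rereading, assuming a one-period lag between sowing and harvest and a prevailing interest rate r: if the wage payable at harvest is w, the wage payable at sowing time is

w_0 = \frac{w}{1+r}

so a farmer who advances wages is implicitly lending at rate r, and any extra margin he earns on the larger wage bill is just interest, not pure profit.)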
Baconbacon, I’m in the middle of something and have to think about this for a second, but do you agree that if I can show you that my original point still stands, even once we introduce the option of workers getting paid at harvest (SO LONG AS THERE ACTUALLY IS A LAG BETWEEN THE APPLICATION OF THEIR LABOR AND THE HARVEST TIME), then you will admit I’m right? (Rather than just tweaking your critique and continuing as if I’m still doing something wrong in the post?)
@bob: If you can show how one farmer could make more money than another with this demonstration, then yes.
Yes! This was a great exercise, baconbacon; I’m glad you suggested it. I had thought about this wrinkle too when I first read Steve Landsburg’s response. (I.e., I wondered what would happen if workers agreed to wait for the finished product and we adjusted wage rates accordingly.)
At this point I am claiming total victory, but I will write it up in a new blog post when I get some free time. I don’t expect you to take my word for it. 🙂
So, I’m confused by this. In this example, farm A gets $2 − $1 = $1 in dollar profits (simplifying away the time dimension), and farm B gets $2 − $1.50 = $0.50 in profits (after using the same time discounting); their profits are no longer the same!
GabbyD, farm A gets $2 in profit. I.e., revenues are $3 and labor expenses are $1. (I think? Doing it from memory.)
I look forward to it, as long as you aren’t attempting to invest the difference between the high labor farmer and the low labor farmer.
Absolutely businesses take ROI into account. I programmed for a hedge fund: we certainly looked at the amount of capital it took to earn X dollars.
Yes, they do. Also, households both save and borrow, and firms do both as well. But if we model a loanable funds market where only households save and only firms borrow… well, so long as we don’t mistake the model for reality…
I do think that, and I understand your point. But my point is that we don’t model things that way, and I think there are simpler things that we do model (that have been omitted from this model) that are behind unequal returns on capital.
“But, they don’t hire the same number of laborers in order to produce that outcome. So that means one type of producer earns a much different *rate* of return on invested capital than the other.”
Can you explain this? How can you even calculate return on invested capital if the firms don’t use capital as a factor of production? Isn’t this rate just “profits/units_of_capital”? Isn’t this undefined?
GabbyD, if I pay $1,000 to workers today in order for them to give me a product that I don’t sell until next year, then I am investing financial capital in the workers. I want to see what the return on my $1,000 is.
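(Concretely, the calculation I have in mind, using hypothetical numbers: if the crop sells for R dollars next year, the rate of return on that outlay is

\frac{R - 1000}{1000}

and two farms with different wage outlays but identical revenues will show different rates of return, which is the apparent tension I’m pointing to.)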
Whenever I teach models, I just spend a couple of minutes saying, “Of course, there is no reason to think demand schedules are really straight lines…” and so on. A lot depends on what you are using the model *for*. I’ve never taught this model, but of course, if it is teaching the wrong lesson, it is not a good teaching model!