You guys are still missing the point of this. It may help to understand some of the commitments you must hold to even get the paradox off the ground.
Again, this is a paradox that deals with decision theory, and within decision theory, we often use dollars as a measure of utility.
So, for example, to be more confident in h than in g just means that I am willing to take the bet {$1 if h, $0 if ~h} over the bet {$1 if g, $0 if ~g}. But for this kind of utility to work, we have to accept certain commitments:
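That cashing-out can be put in a few lines. This is purely my own illustration (the credences 0.7 and 0.4 are made up), comparing the two bets by expected dollar value:

```python
# A minimal sketch, not from the post: "more confident in h than in g"
# cashed out as preferring the bet {$1 if h, $0 if ~h} to {$1 if g, $0 if ~g}.
def bet_value(credence, win=1.0, lose=0.0):
    """Expected dollar value of a bet paying `win` iff the hypothesis holds."""
    return credence * win + (1 - credence) * lose

p_h, p_g = 0.7, 0.4                      # hypothetical credences in h and g
print(bet_value(p_h) > bet_value(p_g))   # True: the h-bet is worth more
```

The point is only that, once utility is dollars, "more confident" and "prefers the bet" come to the same thing.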
1) You value only money.
2) You value each dollar just as much as the next - that is, even if you were a billionaire, you would still value an extra dollar just as much as if you were broke.
This means that we want to maximize our utility - the number of dollars we can get in a given scenario. So while on a practical level most of us would be happy with a million bucks, that attitude doesn't match (2) above. By (2), we would be happy with a million, but we could not be satisfied with it if there were a chance to get another thousand on top of it.
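Commitment (2) just says the utility function is linear in dollars. A quick sketch of my own:

```python
# Sketch of commitment (2): utility is linear in dollars, so an extra
# $1,000 is worth exactly as much to a billionaire as to someone broke.
def utility(dollars):
    return dollars                       # linear: each dollar counts equally

gain_when_broke = utility(1_000) - utility(0)
gain_when_rich = utility(1_000_001_000) - utility(1_000_000_000)
print(gain_when_broke == gain_when_rich)   # True under commitment (2)
```

A more realistic (concave) utility function would break this equality, which is exactly why the practical "I'd be happy with a million" reaction doesn't engage the paradox.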
So the response:
human nature would lean towards the 1 million
is correct, but it's not human nature we're assessing - it's decision theory.
In case you're wondering, the "paradox" is a self-causing effect
No, that's not the paradox at all. An event that causes itself is logically incoherent, not paradoxical. I will try to (briefly) explain what actually generates the paradox.
So I want the most money from this game because I value only money and I value each dollar as much as the next.
Solution 1: Take both boxes
The maximal utility here would be the opportunity to make $1,001,000, and the only way to get it is to take both boxes. Thus, I have overwhelming reason to take both boxes. But knowing the being has very likely predicted this - leaving the closed box empty - I have overwhelming reason to take only the closed box instead. But if the being has predicted that, the closed box contains $1,000,000 and I now have overwhelming reason to take both boxes - and so on.
So if this strategy returns "Take both boxes" as true, it ends up being false (and then true and then false again...)
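The back-and-forth above can be written as a toy loop. This is entirely my own model of the deliberation, and the assumption doing the work is that the predictor tracks whatever my current tentative choice is:

```python
# Toy model of the oscillation in Solution 1 (my own sketch, not the post's).
# One step reasons evidentially (a foreseen one-boxing yields $1,000,000);
# the other holds the prediction fixed (two-boxing always adds $1,000).
def payoff(act, predicted):
    closed = 1_000_000 if predicted == "one-box" else 0
    return closed + (1_000 if act == "two-box" else 0)

def revise(tentative):
    stay = payoff(tentative, tentative)   # the being foresaw my current choice
    if tentative == "two-box":
        # Evidential step: had I one-boxed, that's what would have been predicted.
        return "one-box" if payoff("one-box", "one-box") > stay else tentative
    # Dominance step: prediction now fixed, both boxes add $1,000.
    return "two-box" if payoff("two-box", tentative) > stay else tentative

choice, history = "two-box", []
for _ in range(6):
    history.append(choice)
    choice = revise(choice)
print(history)   # flips forever: ['two-box', 'one-box', 'two-box', ...]
```

Each pass through `revise` makes the previous verdict false again, which is the "true, then false, then true" cycle described above.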
Solution 2: Take the closed box
In this case, I would expect there to be $1,000,000 in the closed box. But there would also be another $1,000 in the open box. So if I decide to take only the closed box to achieve maximum utility, I must then decide to take both boxes. Thus "Take the closed box" under this interpretation comes out false. But then it becomes true, then false, then true, and so on...
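Under the amounts the post uses, the full payoff table makes both pulls visible at once. A sketch of my own, where "one-box" means taking only the closed box:

```python
# Payoff table for the game, using the amounts in the post: the closed box
# holds $1,000,000 iff the being predicted you would take only it, and the
# open box always holds $1,000. Keys are (your act, the being's prediction).
payoffs = {
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 0,
    ("two-box", "one-box"): 1_001_000,
    ("two-box", "two-box"): 1_000,
}

# Pull 1: if the being reliably foresees your act, one-boxing does better.
print(payoffs[("one-box", "one-box")] > payoffs[("two-box", "two-box")])

# Pull 2: for ANY fixed prediction, two-boxing pays exactly $1,000 more -
# which is why settling on either answer immediately destabilizes it.
for predicted in ("one-box", "two-box"):
    print(payoffs[("two-box", predicted)] - payoffs[("one-box", predicted)])
```

Both facts hold at once, and a single "maximize utility" rule cannot satisfy them both - hence the cycle in each solution.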
And thus we have a paradox for decision theory.