Thursday, August 24, 2017

Surface temperature sparsity error modes

This post follows last week's on temperature integration methods. I described a general method of regression fitting of classes of integrable functions, of which the most used to date is spherical harmonics (SH). I noted that the regression involved inverting a matrix HH consisting of all the scalar product integrals of the functions in the class. With perfect integration this matrix would be a unit matrix, but as the SH functions become more oscillatory, the integration method loses resolution, and the representation degrades with the condition number of the matrix HH. The condition number is the ratio of largest eigenvalue to smallest, so what is happening is that some eigenvalues become small, and the matrix is near singular. That means that the corresponding eigenvectors might acquire large multipliers in the representation.
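To make that concrete, here is a minimal sketch (Python, for illustration only; not the actual TempLS code) of forming HH from real spherical harmonics evaluated at station locations under some assumed integration weights, and reading off the condition number. The station coordinates and weights are just placeholders.

```python
# Sketch only: build a real spherical harmonic basis at station locations,
# form the matrix HH of scalar products under a given integration rule
# (here just weighted sums over stations), and check its conditioning.
import numpy as np
from scipy.special import sph_harm

def real_sh_basis(lat_deg, lon_deg, N):
    """Matrix with one row per station, one column per real SH up to order N.
    An order N set has (N+1)**2 functions (N=12 gives 169, N=16 gives 289)."""
    theta = np.deg2rad(lon_deg % 360.0)     # azimuth in [0, 2*pi)
    phi = np.deg2rad(90.0 - lat_deg)        # colatitude in [0, pi]
    cols = []
    for n in range(N + 1):
        for m in range(-n, n + 1):
            Y = sph_harm(abs(m), n, theta, phi)
            if m > 0:
                cols.append(np.sqrt(2) * Y.real)
            elif m < 0:
                cols.append(np.sqrt(2) * Y.imag)
            else:
                cols.append(Y.real)
    return np.column_stack(cols)

# Placeholder "stations": random points, roughly uniform on the sphere,
# with equal weights (the OLS case).  Real station lists are far from uniform.
rng = np.random.default_rng(0)
nst = 3000
lat = np.rad2deg(np.arcsin(rng.uniform(-1, 1, nst)))
lon = rng.uniform(-180, 180, nst)
w = np.full(nst, 1.0 / nst)

H = real_sh_basis(lat, lon, N=12)           # 169 functions
HH = H.T @ (w[:, None] * H)                 # scalar product integrals
evals = np.linalg.eigvalsh(HH)              # ascending; ideally all equal
print("condition number:", evals[-1] / evals[0])
```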

I also use fitted SH for plotting each month's temperature. I described some of the practicalities here (using different functions). Increasing the number of functions improves resolution, but when HH becomes too ill-conditioned, artefacts intrude, which are multiples of these near-null eigenvectors.

In the previous post, I discussed how the condition of HH depends on the scalar product integral. Since the SH are ideally orthogonal, better integration improves HH. I have been taking advantage of that in recent TempLS to increase the order of SH to 16, which implies 289 functions (an order N set has (N+1)² functions), using mesh integration. That might be overdoing it - I'm checking.

In this post, I will display those troublesome eigen modes. They are of interest because they are associated with regions of sparse coverage, and give a quantification of how much they matter. Another thing quantified is how much the integration method affects the condition number for a given order of SH. I'll develop that further in another post.

I took N=12 (169 functions), and looked at TempLS stations (GHCN+ERSST) which reported in May 2017. The considerations on the choice of N are that if it is too low, the condition number is good and the minimum modes don't show features emphasising sparsity; if it is too high, each region like Antarctica can have several split modes, which confuses the issue.

The integration methods I chose were mostly described here:
  • OLS - just the ordinary scalar product of the values
  • grid - integration by summing on a 5x5° latitude/longitude grid. This was the earliest TempLS method, and is used by HADCRUT. (A small weight-construction sketch follows after this list.)
  • infill - empty cells are infilled with an average of nearby values. Now the grid is a cubed sphere with 1536 cells.
  • mesh - my generally preferred method using an irregular triangular grid (convex hull of stations) with linear interpolation.
OLS sounds bad, but works quite well at moderate resolution, and was used in TempLS until very recently.
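To illustrate the grid style, here is a small sketch (my illustration, not the TempLS code, with made-up station coordinates): each 5°x5° cell's area is shared among the stations that report in it, and cells with no stations simply drop out; the infill variant would first give those empty cells a value from neighbouring cells.

```python
import numpy as np

def grid_weights(lat_deg, lon_deg, cell=5.0):
    """Area-based weights for the 'grid' style: each occupied cell's area is
    shared equally among the stations reporting in it; empty cells get nothing."""
    nlat = int(180 / cell)
    nlon = int(360 / cell)
    ilat = np.clip(((lat_deg + 90) / cell).astype(int), 0, nlat - 1)
    ilon = np.clip(((lon_deg + 180) / cell).astype(int), 0, nlon - 1)
    cell_id = ilat * nlon + ilon
    # cell area on the unit sphere: dlon * (sin(lat_top) - sin(lat_bottom))
    edges = np.deg2rad(-90 + cell * np.arange(nlat + 1))
    band_area = np.deg2rad(cell) * np.diff(np.sin(edges))   # per cell, per latitude band
    area = band_area[ilat]
    counts = np.bincount(cell_id, minlength=nlat * nlon)
    w = area / counts[cell_id]               # share the cell area among its stations
    return w / w.sum()                       # normalise over occupied cells only

# toy usage with hypothetical station coordinates (two share one Arctic cell)
lat = np.array([71.2, 70.9, -33.5, 2.0])
lon = np.array([128.0, 127.5, 18.4, 45.0])
print(grid_weights(lat, lon))
```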

I'll show the plots of the modes as an active lat/lon plot below, and then the OLS versions in WebGL, which gives a much better idea of the shapes. But first I'll show a table of the tapering eigenvalues, numbered from the smallest up. They are scaled so that the maximum is 1, so the reciprocal of the lowest is the condition number.
          OLS      grid     infilled   mesh
Eigen1    0.0211   0.0147   0.0695     0.135
Eigen2    0.0369   0.0275   0.138      0.229
Eigen3    0.0423   0.0469   0.212      0.283
Eigen4    0.0572   0.0499   0.244      0.329
Eigen5    0.084    0.089    0.248      0.461
Eigen6    0.104    0.107    0.373      0.535
Eigen7    0.108    0.146    0.406      0.571
Eigen8    0.124    0.164    0.429      0.619
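To put numbers on that, the reciprocals of the Eigen1 row give the condition numbers directly: about 1/0.0211 ≈ 47 for OLS, 1/0.0147 ≈ 68 for grid, 1/0.0695 ≈ 14 for infilled, and 1/0.135 ≈ 7.4 for mesh.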
And here is a graph of the whole sequence, now largest first:


The hierarchy of condition numbers is interesting. I had expected that it would go in the order of the columns, and so it does until near the end. Then mesh drops below infilled grid, and OLS below grid, for the smallest eigenvalues. I think what determines this is the weighting of the nodes in the sparse areas. For grid, this is not high, because each station just gets the area of its cell. For both infilled and mesh, the weight rises with the area covered, and apparently more so with infilled.


Here is an active graph to show the errant modes. You can cycle through "Style", which here means the style of integration (grid, mesh etc), and through the modes, starting from 1 (buttons top right).



The plots are dominated by Antarctica, where the lowest modes focus, with some Arctic activity too; it isn't until a while up the sequence that modes bob up in Africa, with some effect in South America. The weakest style (OLS) is almost all polar in the first 9 modes, while mesh starts showing Africa from about mode 4 up, and later Brazil shows up.

Here is the WebGL plot - I show just the mesh style. It gives a better proportion for the polar behaviour, and shows finer features elsewhere. It is the usual trackball, with radio buttons for the modes. Dots are the stations.



Next post will take this further. I'll do a more systematic look at which styles work best in which circumstances. The next main interest is whether I can get better resolution by restricting to a space without the problem modes. In principle, one could take a very large collection of SH and form the eigenfunctions, which are truly orthogonal with respect to the integration style. The subset with moderate eigenvalues would still give a large orthogonal basis.
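As a rough sketch of that idea (Python, for illustration only; H is the matrix of SH values at stations and w the integration weights, as in the earlier sketch):

```python
import numpy as np

def filtered_basis(H, w, rel_tol=0.05):
    """Combine the SH into eigenfunctions of HH and keep only those whose
    eigenvalue exceeds rel_tol times the largest, dropping the near-null
    sparse-region modes.  Returns the retained basis evaluated at stations."""
    HH = H.T @ (w[:, None] * H)                 # scalar product matrix
    evals, evecs = np.linalg.eigh(HH)           # ascending eigenvalues
    keep = evals > rel_tol * evals[-1]
    C = evecs[:, keep] / np.sqrt(evals[keep])   # orthonormalise retained modes
    return H @ C

# Usage sketch: B = filtered_basis(H, w); regressing the month's anomalies on B
# then involves a well-conditioned (in fact identity) normal matrix.
```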


9 comments:

  1. Talking about spatio-temporal relationships, here is a lone ranger research effort that demonstrates how much we can still discover with respect to geophysics:

    ContextEarth.com/2017/08/24/lunisolar-forcing-of-earthquakes

    Had some discussion on whether he somehow mistakenly or accidentally plotted X = X, but I don't think so.

  2. What I'd like to know is if there is a way to assess if integration is done correctly. In general, is there a way to assess how a dataset is able to reflect the Real World?

  3. "What I'd like to know is if there is à way to assess if integration is done correctly. In general, is there à way to assess how à dataset is able to reflect the Real World ?"

    The way to do this is to run cross-validation tests. If you have a physics-based model, this works very well because you can test if your integration extrapolates from the training interval over to the test interval.

    For a standing wave phenomenon such as ENSO, there are two aspects, a spatial one and a temporal one. The spatial aspect is super easy to cross-validate, as the ENSO forms an almost perfect spatial dipole that shows opposite signs at Darwin and Tahiti. So that at any one time, one can show that if you have knowledge of the temperature or atmospheric pressure at Darwin, you can accurately predict the temperature or pressure at Tahiti by reversing the value of the anomaly.

    For ENSO temporal cross-validation, this has been a longstanding challenge and a problem that no one has been able to solve. But I can demonstrate it convincingly here by assuming the lunar tidal forcing:

    http://contextearth.com/2017/08/08/enso-split-training-for-cross-validation/

    With this lunisolar model of ENSO, one can take any time interval of the ENSO measure and predict the value at any point in time backward or forward.

    Replies
    1. Thanks. Maybe you're talking more about the difficulty climate models have in predicting El Nino. I was rather considering global temperature datasets and the different methods of giving a global estimation of the world temperature.
      Nick, you may have already explained this in a previous post, but I was wondering which method you consider the most accurate for giving a global mean temperature.
      As you do integration of GHCN yourself, you probably have an opinion about Gistemp, Hadcrut, NOAA, JMA and Berkeley. Statistical methods probably have an influence on the final result. I read that kriging is one of the most accurate interpolation techniques, but I'm far from being a specialist; that's why I'm asking.
      Gistemp also has an interpolation method, though it may be based more on neighboring stations than on statistical correlations, as in Cowtan's kriging? GISS also uses satellites to improve its data.
      As for Berkeley, it seems they also do statistical treatment of the data to incorporate more stations. Anyway, I may be wrong in what I said, but I was wondering if there is a way to assess all those methods. I believe you try to make your TempLS as accurate as possible, and you may have something like a quality standard in mind.

    2. Actually, the only way one can evaluate whether an integration works on an extrapolated interval is by using a physical model. Otherwise, it is basic interpolation using knowledge of the neighboring points. If there are enough next-near neighbors, one might be able to infer some sort of sinusoidal signal, but again that would only work effectively for something as obvious as the ENSO dipole. There might be some other dipoles, such as the Arctic Oscillation or North Atlantic Oscillation, for which you could assess whether it would work in the "Real World".

      ENSO is a partial differential equation, and being able to separate the spatial from the temporal allows one to do two independent integrations. Remember that the temporal oscillation will screw up whatever you do for the spatial extrapolation unless you take that into account, i.e. predicting the sign of the excursion at any one point in time. And remember that ENSO has a HUGE and SIGNIFICANT effect on global temperature at any point in time.

      I am operating in the real world with all the physics that I can apply.

  4. Pep,
    "Nick, you may have already explained this in a precedent post but I was wondering which method you consider as the most accurate to give a global mean temperature."
    Well, I still think irregular triangular mesh is best. I look for other methods partly to seek agreement, and also because, as with these modes, they can tell you something else. I don't think there is really much difference, though, provided you do something about empty cells. GISS interpolates, which should work. I think kriging is fine, but overkill. Every point which isn't measured is, or should be, estimated from local values. There is sufficient noise that looking for perfect interpolation isn't really hlping much. I did a comparison of methods here. It's one approach to a quality standard.

    As for BEST, I think what you have in mind is their use of least squares to avoid requiring a fixed anomaly base interval, which tends to exclude stations which don't have data there. I have used that (pre-BEST) in TempLS too, and I think it is the right thing to do.

    Replies
    1. Thanks! Empty cells should be looked at more closely. For the layman that's not obvious at all, when even scientists struggle to assess the methods employed to infill empty cells (interpolation, kriging). An explanation of your method for empty cells compared with GISS and BEST would be welcome, even though I believe you already make a lot of effort describing your work. An "Interpolation for dummies" explanation would help to better appreciate the discussion here.
      In this context, it would be nice if you could explain in simple words why you think the irregular triangular mesh is the best method.

    2. Pep,
      I tried to cover this in this recent post. I'm working on a post comparing the various methods quantitatively - there is a previous effort here. The task of global averaging is to make a best approximation for every point that you don't have measurements for, and integrate it. With gridding, you approximate unmeasured points within a cell by the average of points that you have measured in that cell, which is fine. But if you just leave out the cells without data, you are effectively approximating them by the global average, which is not fine. They should also be approximated by some average of local cells. It's best done as a two-step process - infer an average for the cell, and then attribute that to the unmeasured points. It doesn't really matter in detail how you do that, as long as the average is local. Kriging is used in mining when you really want to know a lot about a particular point - where you want to drill - and so it is worth a lot of work to optimise. But with integrating, there is a huge number of points to add up, and you don't have to work so hard to get the minimum variance estimate. Not that kriging makes a huge difference there anyway.

      On this basis, I like triangular mesh because it estimates each unmeasured value by three known nodes that are among the nearest (the Delaunay property helps ensure that is true). There is a penalty for sparse data leading to large triangles, but that would hurt any method. It also gives a continuous approximation, while a grid does not. That means that you aren't giving very different estimates as you cross arbitrary lines. The estimate could be stabilized by taking into account more points but, as said, this doesn't matter when you are summing large numbers of them.
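      For concreteness, here is a rough sketch of that kind of mesh weighting (Python, for illustration; not the TempLS code itself, which differs in detail). The convex hull of the station unit vectors gives the spherical Delaunay triangulation, and with linear interpolation each node gets a third of the area of the triangles meeting it:

```python
import numpy as np
from scipy.spatial import ConvexHull

def mesh_weights(lat_deg, lon_deg):
    """Sketch of mesh weights: each station gets one third of the area of the
    spherical triangles that meet it (needs enough stations for a 3D hull)."""
    lat, lon = np.deg2rad(lat_deg), np.deg2rad(lon_deg)
    P = np.column_stack([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])              # unit vectors
    tri = ConvexHull(P).simplices                   # Delaunay triangles on the sphere
    a, b, c = P[tri[:, 0]], P[tri[:, 1]], P[tri[:, 2]]
    # spherical excess = triangle area on the unit sphere (van Oosterom-Strackee)
    num = np.abs(np.einsum('ij,ij->i', a, np.cross(b, c)))
    den = 1 + np.einsum('ij,ij->i', a, b) + np.einsum('ij,ij->i', b, c) \
            + np.einsum('ij,ij->i', a, c)
    area = 2 * np.arctan2(num, den)
    w = np.zeros(len(lat))
    for k in range(3):                              # a third of each triangle to each vertex
        np.add.at(w, tri[:, k], area / 3)
    return w / w.sum()
```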

    3. Thanks for this explanation. So it seems to me your method is not so different from GISS. It looks like interpolation with triangles instead of boxes, and more refined calculations. But while boxes are not designed to fit the GHCN stations, the triangles follow them (the three known nodes). I've got the feeling that may change the final result a little bit, even if I don't really understand how.
      Your next post comparing the various methods quantitatively will surely be interesting anyway.
