There is. It's called the International Date Line. It runs through the Pacific Ocean, roughly between Russia and Alaska (I am not sure about the exact location; it's not a straight line).
1. What are the different time zones in USA?
There are four in the continental US: Eastern, Central, Mountain, and Pacific. They are needed because of the geographic size of the US.
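As a quick sketch (not part of the original answer), the standard-time offsets of those four zones can be checked with Python's zoneinfo module; the IANA zone names below are the usual representatives for each US zone, and a mid-January date is used so DST is not in effect:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A winter date, so all four zones are on standard (non-DST) time.
winter = datetime(2024, 1, 15, 12, 0)

# Representative IANA zone names for the four continental US zones.
zones = {
    "Eastern": "America/New_York",
    "Central": "America/Chicago",
    "Mountain": "America/Denver",
    "Pacific": "America/Los_Angeles",
}

# UTC offset in hours for each zone at the chosen instant.
offsets = {name: ZoneInfo(tz).utcoffset(winter).total_seconds() / 3600
           for name, tz in zones.items()}
print(offsets)  # Eastern -5, Central -6, Mountain -7, Pacific -8
```

During DST (roughly March to November) each offset shifts one hour closer to UTC.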
2. How to use spatialite and the tz database to tag many points with time zones
You might try downloading the shapefile of time zones from here. Import the whole polygon shapefile into your spatialite database, then construct an UPDATE query using ST_Contains() on the points table to find which tz each point is in. You will definitely want to set up a spatial index on the timezones table, and use it in your query. (edit...) Referring to the comment below: I think you have the spatial index backwards. You created an index on the timezones table (correctly) but you refer to an index on the trackpoints table (incorrect). It should be: ... AND z.ROWID IN (SELECT ROWID FROM SpatialIndex WHERE f_table_name = 'tz_world_mp' AND search_frame = t.geom)
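If spatialite is not an option, the core of what ST_Contains() does for each point is a point-in-polygon test. Here is a minimal pure-Python sketch of that tagging logic, using the standard ray-casting check; the rectangular "zone" polygons and their names are hypothetical, purely for illustration (real tz_world polygons are far more complex, which is why the R-tree spatial index matters):

```python
def point_in_polygon(x, y, polygon):
    """Ray casting: count how many polygon edges a rightward ray from
    (x, y) crosses; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical rectangular "time zone" polygons as (lon, lat) rings.
zones = {
    "Zone_A": [(-10, -10), (0, -10), (0, 10), (-10, 10)],
    "Zone_B": [(0, -10), (10, -10), (10, 10), (0, 10)],
}

def tag_point(lon, lat):
    """Return the name of the first zone containing the point, or None."""
    for name, poly in zones.items():
        if point_in_polygon(lon, lat, poly):
            return name
    return None

print(tag_point(-5, 0))  # Zone_A
print(tag_point(5, 3))   # Zone_B
```

In spatialite the same per-point test runs in SQL, and the SpatialIndex subquery above prunes candidate polygons by bounding box before the exact containment test.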
3. I need a LIST of all countries in the world with their time zones?
I need you to do your own homework. Einstein did not go on Yahoo! Answers to figure out E = mc squared; he did his own work. Why are you too special to do your own homework? Screw it, I hope someone answers your question in its entirety and you eventually get so lazy later on in life that you do not even go to work.
4. Did anyone notice that CNN mixed up time zones? Is CNN stupid?
I did not watch it. CNN tries to make you think the way they do. They are owned by Time Warner and they are stupid. They have gotten to where they are not reporting the news but trying to make it. They are a business, and they cut labor like every other business, so they probably did not even notice it themselves.
5. Data Warehouse design for reporting against data for many time zones
I've solved this by having a very simple calendar table - each year has one row per supported time zone, with the standard offset and the start/end datetimes of DST and its offset (if that time zone observes it). Then an inline, schema-bound, table-valued function takes the source time (in UTC, of course) and adds/subtracts the offset.

This will obviously never perform extremely well if you are reporting against a large portion of the data; partitioning might seem to help, but you will still have cases where the last few hours of one year or the first few hours of the next actually belong to a different year when converted to a specific time zone - so you can never get true partition isolation, except when your reporting range does not include December 31 or January 1. There are a couple of weird edge cases you need to consider.

Even with those edge cases in mind, I still think you have the right approach: store the data in UTC. It is much easier to map data to other time zones from UTC than from some time zone to some other time zone, especially when different time zones start/end DST on different dates, and even the same time zone can switch using different rules in different years (for example, the U.S. changed the rules 6 years ago or so). You will want to use a calendar table for all of this, not some gargantuan CASE expression (not statement). I just wrote a three-part series for MSSQLTips.com on this; I think the 3rd part will be the most useful for you.

A real live example, in the meantime

Let's say you have a very simple fact table. The only fact I care about in this case is the event time, but I will add a meaningless GUID just to make the table wide enough to care about. Again, to be explicit, the fact table stores events in UTC time and UTC time only.
I've even suffixed the column with _UTC so there is no confusion.

Now, let's load our fact table with 10,000,000 rows - representing an event every 3 seconds (1,200 rows per hour) from 2013-12-30 at midnight UTC until sometime after 5 AM UTC on 2014-12-12. This ensures that the data straddles a year boundary, as well as DST changes forward and back for multiple time zones. This looks really scary, but it took 9 seconds on my system. The table should end up being about 325 MB.

And just to show what a typical seek query will look like against this 10MM-row table: if I run this query, I get this plan, and it returns in 25 milliseconds*, doing 358 reads, to return 72 hourly totals.

* Duration as measured by the free SentryOne Plan Explorer, which discards results, so this does not include network transfer time of the data, rendering, etc.

It takes a little longer, obviously, if I make my range too large - a month of data takes 258ms, two months takes over 500ms, and so on, and parallelism may kick in. This is where you start thinking about other, better solutions to satisfy reporting queries, and it has nothing to do with what time zone your output will display. I won't get into that; I just want to demonstrate that time zone conversion is not really going to make your reporting queries suck all that much more, and they may already suck if you are querying large ranges that are not supported by proper indexes. I am going to stick to small date ranges to show that the logic is correct, and let you worry about making sure your range-based reporting queries perform adequately, with or without time zone conversions.

Okay, now we need tables to store our time zones (with offsets in minutes, since not everybody is an even number of hours off UTC) and DST change dates for each supported year. For simplicity, I am only going to enter a few time zones and a single year to match the data above. I included a few time zones for variety: some with half-hour offsets, some that do not observe DST.
Note that Australia, in the southern hemisphere, observes DST during our winter, so their clocks go back in April and forward in October. (The above table flips the names, but I am not sure how to make this any less confusing for southern hemisphere time zones.)

Now, a calendar table to know when time zones change. I am only going to insert rows of interest (each time zone above, and only the DST changes for 2014). For ease of calculation back and forth, I store both the moment in UTC when a time zone changes and the same moment in local time. For time zones that do not observe DST, it's standard time all year long, and DST "starts" on January 1.

You can definitely populate this with algorithms (and the upcoming tip series uses some clever set-based techniques, if I do say so myself), rather than loop, populate manually, what have you. For this answer I decided to just manually populate one year for the five time zones, and I am not going to bother with any fancy tricks.

Okay, so we have our fact data and our "dimension" tables (I cringe when I say that), so what is the logic? Well, I presume you are going to have users select their time zone and enter the date range for the query. I will also assume that the date range will be full days in their own time zone; no partial days, never mind partial hours. So they will pass in a start date, an end date, and a TimeZoneID. From there we will use a scalar function to convert the start/end dates from that time zone to UTC, which will allow us to filter the data based on the UTC range.
Once we've done that, and performed our aggregations, we can then apply the conversion of the grouped times back to the source time zone before displaying to the user.

The scalar UDF:

And the table-valued function:

And a procedure that uses it (edit: updated to handle 30-minute offset grouping):

(You may want to have a go at short-circuiting there, or a separate stored procedure, in the event that the user wants reporting in UTC - obviously translating to and from UTC is going to be wasteful busy work.)

Sample call:

Returns in 41ms*, and generates this plan:

* Again, with discarded results.

For 2 months, it returns in 507ms, and the plan is identical other than rowcounts. While slightly more complex, and increasing run time a little bit, I am fairly confident that this type of approach will work out much, much better than the bridge table approach. And this is an off-the-cuff example for a dba.se answer; I am sure my logic and efficiency could be improved by folks much smarter than me.

You can peruse the data to see the edge cases I talk about - no row of output for the hour where clocks roll forward, two rows for the hour where they rolled back (and that hour happened twice). You can also play with bad values; if you pass in 20140309 02:30 Eastern time, for example, it's not going to work too well. I might not have all of the assumptions right about how your reporting will work, so you may have to make some adjustments. But I think this covers the basics.
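To make the calendar-table idea above concrete outside of T-SQL, here is a minimal Python sketch of the same design: one row per supported zone/year holding the standard offset, the DST offset, and the UTC moments when DST starts and ends, plus a conversion function that plays the role of the scalar UDF. The function name and table shape are my own for illustration; the 2014 change dates for US Eastern are the real ones (2 AM local on March 9 and November 2):

```python
from datetime import datetime, timedelta

# Calendar "table": one row per (zone, year) with offsets in minutes,
# since not every zone is an even number of hours off UTC.
# (standard_offset_min, dst_offset_min, dst_start_utc, dst_end_utc)
calendar = {
    ("US Eastern", 2014): (
        -300, -240,
        datetime(2014, 3, 9, 7, 0),   # 2 AM EST -> EDT, expressed in UTC
        datetime(2014, 11, 2, 6, 0),  # 2 AM EDT -> EST, expressed in UTC
    ),
    # Zones that do not observe DST would store equal offsets, with DST
    # "starting" on January 1, exactly as the answer describes.
}

def utc_to_local(dt_utc, zone):
    """Mirror of the scalar UDF: add whichever offset is in effect
    at the given UTC instant."""
    std, dst, start, end = calendar[(zone, dt_utc.year)]
    offset = dst if start <= dt_utc < end else std
    return dt_utc + timedelta(minutes=offset)

print(utc_to_local(datetime(2014, 1, 15, 12, 0), "US Eastern"))  # 07:00 EST
print(utc_to_local(datetime(2014, 7, 15, 12, 0), "US Eastern"))  # 08:00 EDT
```

Because the switch moments are stored in UTC, the lookup is a simple half-open range test, which is exactly why storing the facts in UTC (and converting only at the reporting edge) keeps the logic this small.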