The Future of Windows

Discussion in 'Tech Talk' started by BuzzBuzzYolk, May 17, 2009.

  1. BuzzBuzzYolk
    Veteran

    Joined:
    Jun 24, 2008
    Messages:
    330
    Likes Received:
    1
    Occupation:
    Jedi
    Location:
    cs_office
    Let's pretend there's a car company called "Windows". It builds Windows (which are cars) and does its best to make the car faster, more comfortable, and more entertaining every 3 years. What can they add this year? The car already has leather seats, A/C, tinted windows, improved tires, and improved aerodynamics. The only thing they can do now is change the style and make the seatbelts slightly more comfortable.

    What is Vista to XP? They changed the style and added "gadgets" and the "sleep" function. That's all the usefulness I've ever gotten from it. From Vista I got my first-ever blue screen of death since Windows 95.

    From what I've seen, Windows 7 is a patch for Vista. What more can they do? What will be new in the Windows after that?
     
  2. Roch The Shaman
    Veteran

    Joined:
    Jul 6, 2008
    Messages:
    2,437
    Likes Received:
    6
    Occupation:
    Fix jets
    Location:
    Barksdale AFB
    but... it's new... new, and new is good, old is bad. new is cool, old is not cool
     
  3. Brownmccoy
    Veteran

    Joined:
    Jun 22, 2008
    Messages:
    3,149
    Likes Received:
    25
    Location:
    Halifax, NS, Canada
    From what I've heard, Windows 7 is what Vista was supposed to be. Windows 7 is to Vista as XP is to Windows ME.

    The bugs have been removed, the security shit isn't absolutely stupid, and... to be honest, that's all I know. My friend's a techie and he seems optimistic about it (which is a rare thing, him being optimistic about something).
     
  4. dr_jay
    Veteran

    Joined:
    Jun 22, 2008
    Messages:
    515
    Likes Received:
    7
    Location:
    Dirty Jerz
    I downloaded the Windows 7 RC last week, and let me tell you, it's incredible. It is very streamlined and definitely doesn't take the resources that Vista did. It will run comfortably on an older dual-core processor and 1 gig of RAM. The UAC isn't quite as annoying either. You can check out a few reviews on maximumpc and the like. Also, I believe the push will be starting for DirectX 11.
     
  5. EF2
    Veteran

    Joined:
    Jun 22, 2008
    Messages:
    1,307
    Likes Received:
    5
    Occupation:
    Media Photographer
    Location:
    Pittsburgh
    I'd like to try Windows 7 sometime, but my install of Vista is so damn stable, I don't feel like messing with partitions.

    I'll wait for Nvidia to write decent drivers before I update my OS.
     
  6. The Communist
    Veteran

    Joined:
    Jun 22, 2008
    Messages:
    4,380
    Likes Received:
    9
    Occupation:
    Student
    Location:
    Texas
    My exact thoughts.
     
  7. s o k a r
    Veteran Star Citizen Officer

    Joined:
    Jun 22, 2008
    Messages:
    6,431
    Likes Received:
    62
    Gender:
    Male
    They can make it better for the environment.
     
  8. Sirius
    Veteran Crowfall Member

    Joined:
    Jun 22, 2008
    Messages:
    1,620
    Likes Received:
    1
    Occupation:
    Software developer
    Location:
    Bellevue, WA
    Yeah, the nVidia driver thing is possibly the most significant. Guild Wars starts running at something fairly mediocre like 15 fps for me, but if I window it then re-maximise it, I get about 40 fps.

    Why? Who knows.

    As for what happens after Windows 7, that remains to be seen. There are still things that could be done - there was a video Microsoft released recently that showed a bit of that, although it's probably in the more distant future since it requires hardware we don't have yet as well as operating systems to drive it.
     
  9. Xamiazi
    Veteran

    Joined:
    Jun 24, 2008
    Messages:
    2,902
    Likes Received:
    5
    Occupation:
    Paramedic/Navy SEAL
    Location:
    coronado
    I'm running the old BETA version of W7, and it's great! Not only is it faster, but it seems to make my games work better!
     
  10. Grumble
    Guest

    Joined:
    Jun 22, 2008
    Messages:
    580
    Likes Received:
    9
    Occupation:
    Graduate Student
    Location:
    Alberta
    New versions of operating systems will always be needed, if only for the single reason that new hardware comes along. What happens when 128-bit processors come along? Or IPv6 becomes mandatory? You can't just patch for that. There will also be new types of software invented that current systems cannot handle. User interfaces will also have to change to handle new input/output techniques that haven't been invented yet. And yes, companies will always find ways to make them 'newer' for the sake of being 'newer', because they simply need to keep selling a product to stay in business. I don't like Microsoft's approach, though; Vista seemed like a step back IMO, and my installation has performed just awfully.
     
  11. hendricks
    Guest

    Joined:
    Jun 23, 2008
    Messages:
    45
    Likes Received:
    0
    if intel had its way you would see the demise of "core grouped" processors in favor of multi-linked cores (read about the Larrabee architecture) which get accessed as needed at runtime. That requires that the old Windows-type environment (fake parallel processing by waiting on interrupts) be replaced by a "real" parallel processing OS and applications. Most of Windows' "slow" issues are about waiting for interrupts from device drivers and dependent processes. Windows is a "linear", single-operation-at-a-time OS (although it fakes multi by "switching" between "threads"), and it must disappear in order to enable the next major performance leap. Lotsa luck there Suckusoft (er, Microsoft) as it again tries to clean out my wallet with yet another lame OS iteration (W7).
     
  12. Sirius
    Veteran Crowfall Member

    Joined:
    Jun 22, 2008
    Messages:
    1,620
    Likes Received:
    1
    Occupation:
    Software developer
    Location:
    Bellevue, WA
    ...

    you HAVE noticed Windows uses multiple cores (correctly), right?

    The stuff that makes it slow is indeed when processes are blocked waiting for input, or disk access, or something like that - but because these resources are inherently one-operation-at-once, all you can do is switch to another process. Which, actually, is what Windows (and every other operating system out there that supports multiprocessing) does.

    The interrupt mechanism merely supports this behaviour on single-core processors. It doesn't actually impact performance on multi-core processors.
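
    As for Windows using the cores correctly - you can watch it do it. A minimal sketch (plain Win32; the spin loop is just arbitrary busy-work I made up to keep the cores loaded): spawn one CPU-bound thread per logical processor, and Task Manager will show every core loading up, not just the first.

    #include <windows.h>
    #include <cstdio>

    // CPU-bound worker: pure busy-work so the scheduler has something
    // to place on each core.
    DWORD WINAPI SpinWorker(LPVOID) {
        volatile unsigned long long x = 0;
        for (unsigned long long i = 0; i < 4000000000ULL; ++i) x += i;
        return 0;
    }

    int main() {
        SYSTEM_INFO si;
        GetSystemInfo(&si);                    // query logical processor count
        DWORD n = si.dwNumberOfProcessors;
        if (n > MAXIMUM_WAIT_OBJECTS)          // WaitForMultipleObjects caps at 64
            n = MAXIMUM_WAIT_OBJECTS;

        printf("Spawning %lu CPU-bound threads...\n", n);
        HANDLE threads[MAXIMUM_WAIT_OBJECTS];
        for (DWORD i = 0; i < n; ++i)
            threads[i] = CreateThread(NULL, 0, SpinWorker, NULL, 0, NULL);

        // While this waits, Task Manager should show every core busy.
        WaitForMultipleObjects(n, threads, TRUE, INFINITE);
        for (DWORD i = 0; i < n; ++i) CloseHandle(threads[i]);
        return 0;
    }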
     
  13. hendricks
    Guest

    Joined:
    Jun 23, 2008
    Messages:
    45
    Likes Received:
    0
    according to Intel's scientists, Windows does NOT use the multiple cores correctly, at least not in a truly parallel processing manner. According to their published research, which i referenced before and you can find the link on their web site, Windows still "fakes it". Windows only uses the 2nd, 3rd or 4th core about 40% of the time, and processes executed on the 2nd, 3rd, and 4th cores still wait for the termination of processes on the primary core. That's primarily because the applications being run aren't written in a manner allowing Windows (the OS) to separate parts of them onto other cores' threads; they want to use the primary core exclusively, wasting the remaining 60% of each non-primary core's processing potential. guess it is an argument about semantics regarding "correctly". And, sure, you could state this isn't Windows' fault, rather it is stupid application developers, but... unless the OS forces parallel programming efforts, developers will always be lazy. which is a whole other issue to discuss.

    as for the interrupt mechanism point, well, if the primary core is running a process that is waiting on an interrupt from a hardware device, and another core's process needs that device, the other core is still made to wait (primarily on the communications bus, e.g. to/from a video card or a hard drive controller). that isn't independent parallel processing (at least by most definitions i have seen, including Intel's). and, yes, i do understand that Windows now relies on mediated bus management, via an MMU (DRAM read/write) and a "southbridge" I/O Controller Hub or Platform Controller Hub chipset between devices, instead of interrupt dependency (dos, older windows), but the end effect is still basically the same. actually, this has its own "overhead" load even if it is "offloaded" from the operating system itself.

    anyway, i am not a hardware scientist so i won't try to belabor the point further (and i sure wouldn't try to argue against Intel's scientists' published opinions); but in response to the original question, the answer is still the same: Windows, at least as we know it, must "go away" in order to enable the future technological leap (Intel's opinion, at least).
     
    Last edited: May 22, 2009
  14. Ben K
    Veteran

    Joined:
    Jun 23, 2008
    Messages:
    2,835
    Likes Received:
    0
    Occupation:
    Artist
    Location:
    Auckland
    Repost it, it doesn't seem to be in your message history. I'm not sure what to search for on Intel's site; all the stuff that came up seemed to be about corporate responsibility etc etc.
     
  15. Sirius
    Veteran Crowfall Member

    Joined:
    Jun 22, 2008
    Messages:
    1,620
    Likes Received:
    1
    Occupation:
    Software developer
    Location:
    Bellevue, WA
    I've been finding material on the Larrabee core (in a nutshell, it's the Cell processor all over again, except on an x86 model and probably with other architectural improvements) but haven't been able to track down anything stating what Windows has to do with it (the only MS coverage on Intel's site seems to be about VS2010 and Windows 7, and the tone seems closer to enthusiastic than dismissive). Thus I can't really appreciate what the issue is, but there are some general points I can make.

    The first is that no operating system can prevent bad practice. Single-threaded applications are easier to write on Linux and Mac OS as well, as they would be on any conceivable operating system. Taking Windows out of the picture wouldn't solve that.

    The second is that I agree the problem with "not enough parallelisation" is there, and that even quad-cores are not being properly used because the applications they are running cannot be divided properly. There are two ways I can think of to solve this problem; the first is better programmer training, and the second is tools that make parallel programming faster and less difficult to get right. Both approaches will work whether you're using Linux, OS X, or yes, even Windows.
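
    To give the second approach a concrete flavour: OpenMP, which shipping compilers (VC++, GCC, Intel's own) already support, lets you parallelise a loop with a single annotation instead of hand-rolled thread management. A minimal sketch - the toy sum and the loop bound are just my own illustration:

    // Build with: cl /openmp example.cpp   (or: g++ -fopenmp example.cpp)
    #include <omp.h>
    #include <cstdio>

    int main() {
        const int N = 100000000;
        double sum = 0.0;

        // One pragma asks the runtime to split the iterations across cores;
        // the reduction clause combines the per-thread partial sums safely.
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; ++i)
            sum += 1.0 / (1.0 + i);

        printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }

    The tool does the splitting and the thread-safe accumulation; the programmer only has to mark which loops are safe to divide. And that works the same whether the OS underneath is Windows or anything else.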

    And finally, before I forget: yeah, having cores that are blocked by another core that is using the hardware device they want isn't independent parallel processing, but you can't program your way around it. That problem is caused by the simple fact that much of the hardware can only do one thing at a time, even if the CPU can do more. The best you can do is switch that process out of execution and do the work you can do in the meantime (all modern OSes do this already).
     
  16. hendricks
    Guest

    Joined:
    Jun 23, 2008
    Messages:
    45
    Likes Received:
    0
    sirius, concur with what you said.

    except the stuff comparing larrabee to the previous "cell processor". Intel purports that larrabee's goal is to enable an OS/application to use "on demand" core utilization of many thousands of cores. the cores can be distributed in the "cloud" rather than across "linked" hardware. so there are, apparently, significant differences here (again, i am not a hardware scientist so i don't understand all the distinctions). conclusion tho, Windows ain't going to "cut it". no way Windows understands core utilization in the "cloud". if you remember IBM's "big blue" multi-CPU, x86 linked hardware project that required all the "boxes" of cpus to be in the same room (and, supposedly, way outran the biggest and baddest CRAY at the time), it is my understanding that larrabee isn't supposed to care where the cores "are".

    and, Ben, there is an article somewhere on Intel's site that states they don't see Windows as being the future for larrabee. wish i had downloaded it and saved it, as they move stuff around on their sites more than we do at my work!
     
  17. hendricks
    Guest

    Joined:
    Jun 23, 2008
    Messages:
    45
    Likes Received:
    0
    some of intel's interesting points:

    "The Larrabee architecture uses a 1024 bits-wide, bi-directional ring network (i.e., 512 bits in each direction) to allow agents to communicate with each other in low latency manner resulting in super fast communication between cores." this is what, supposedly, can be extended to function between non-colacated "boxes" of cores.

    intel plans to use this tech to enter the graphics GPU market as its first retail endeavor. which seems rather strange, but, in retrospect, i guess multi-GPU graphics cards (nvidia 200 series for example) might be a way to convince outsiders of "proof of concept" prior to investing in creating expensive massive computational platforms.

    the card won't release until H1 2010, so they are behind original schedule.

    Larrabee will be, in its first iteration, a 32-core processor. Each core is expected to be an x86 core, and each will be paired with a vector processing unit.

    it is clear they chose to use it in the graphics market because the OS of any video card is proprietary and not dependent upon Windows for internal operations.

    the intel scientist's comment was that they did this because 1) intel needed to recoup the massive amount of money they have spent on this so far and 2) there is no OS that would use it. (damn, i can't find that article either now; wonder if he got in trouble for that statement?)

    "The idea seems to be that a Larrabee workstation could use the chip for general business applications, mining the humungous x86 application base". "New compilers are going to be needed to use Larrabee to its best effect. It is reported that Intel will introduce a packaged set of graphics development software tools called Parallel Studio."

    another interesting (unanswered?) issue: "We asked whether there was any limitation to maintaining cache coherence across multiple chips and the answer was that it could be possible with enough bandwidth between the two chips." as of today, the "maximum" number of cpu cores is 64 on one hardware "platform". so... this seems to conflict with other statements regarding "thousands" of cores.

    conclusion, we really don't know how to weed out the hype from the fact.

    however, i still don't see Windows handling 64 cores without vast "upgrading". but i want a workstation with one!
     
  18. BobFromMarketing
    Banned

    Joined:
    Jun 22, 2008
    Messages:
    1,418
    Likes Received:
    1
    This is what I am doing, but then again, after SP1 I'm pretty damn happy with Vista, so that puts me apart from most of society. But most people have decided they hate Vista because someone told them to, and nothing anyone says or any facts they bring up will prove otherwise, so :rolleyes:
     
  19. Sirius
    Veteran Crowfall Member

    Joined:
    Jun 22, 2008
    Messages:
    1,620
    Likes Received:
    1
    Occupation:
    Software developer
    Location:
    Bellevue, WA
    Aha. I actually am seeing a couple of things that Windows doesn't handle (at the moment) now. First is the use of thousands of cores; new versions of Windows Server can handle 256 ... er, processors, I think. Same for Linux, apparently (I'm sure custom versions could be made to work with more though; IBM had to do it somehow).

    But processors are slightly different from cores. Nonetheless, some kind of limitation might apply since they aren't overly different from a parallelism perspective.

    Now the other thing that I don't think Windows can do (yet) is distributed/non-uniform processing. It lets you split things across processors on the same machine, but not so much "borrowing" processors from elsewhere, let alone doing that efficiently (to minimise the amount of intercommunication - because the network is a huge bottleneck in those scenarios). Even less so if those other processors are actually shared between different computers doing different things.

    Now, of course, you can achieve the same results by running special software to manage those tasks on the other machines (such as cloud computing platforms) - but the tasks have to be specially designed to work that way. For the meantime, at least, that seems to be good enough though.
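
    To make "specially designed" concrete: the work has to be broken into chunks that are pure functions of their own inputs, with no shared state, so that each chunk could in principle be serialised and shipped to another machine. A toy sketch, with local threads standing in for remote workers (the worker count and chunking scheme are just illustrative):

    #include <vector>
    #include <thread>
    #include <numeric>
    #include <cstdio>

    // A "distributable" task: depends only on its own chunk and touches no
    // shared state. That independence is the special design - the chunk
    // could just as easily be sent over a network to another box.
    long long sum_chunk(const std::vector<int>& v, size_t begin, size_t end) {
        long long s = 0;
        for (size_t i = begin; i < end; ++i) s += v[i];
        return s;
    }

    int main() {
        std::vector<int> data(1000000, 1);
        const size_t workers = 4;                 // stand-ins for remote nodes
        std::vector<long long> partial(workers, 0);
        std::vector<std::thread> pool;

        const size_t chunk = data.size() / workers;
        for (size_t w = 0; w < workers; ++w) {
            size_t begin = w * chunk;
            size_t end = (w + 1 == workers) ? data.size() : begin + chunk;
            pool.emplace_back([&, w, begin, end] {
                partial[w] = sum_chunk(data, begin, end);  // each worker writes its own slot
            });
        }
        for (auto& t : pool) t.join();

        // The only cross-worker communication is this tiny merge step -
        // exactly what you want when the network is the bottleneck.
        long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
        printf("total = %lld\n", total);
        return 0;
    }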

    P.S. In retrospect, I'm not sure how much design would need to go into it other than a lot of parallelism; that's still enough that few programs are going to see a performance gain from this, though.