




Thread: Development of general purpose processors (CISC-> RISC -> ???)

  1. #1
    Member
    Join Date
    Aug 2018
    Posts
    4
    Thumbs Up
    Received: 1
    Given: 0

    Development of general purpose processors (CISC-> RISC -> ???)

After CISC and RISC, the development of general-purpose processor architecture has basically stalled.

The CISC architecture, represented by the x86 instruction set, holds its territory mainly through compatibility. Except for a few companies, no one develops it further, and its structure itself evolves very slowly. No new company is currently developing a pure CISC chip.

The RISC architecture also faces a big dilemma. Its theory rests on the statistical result that about 80% of the instructions executed by a typical program come from only 20% of a processor's instruction set. But it does not say what to do with the 80% of instructions that are rarely used; that choice is made case by case during chip design.

With the rapid improvement of wafer technology, which in some respects is equivalent to an increase in available chip area, processor designs no longer need to be as penny-pinching as before. Clearly, the original RISC idea is no longer appropriate (although pipelining, the overlapped execution of multiple instructions developed on the basis of the RISC architecture, must be retained). Some RISC processors even include branch prediction. And if one processor core is not enough, we simply put multiple cores on one chip; is that still a RISC processor? It can be seen that the current general-purpose processor architecture has fallen far behind the development of wafer technology.

Why do so many people nowadays say that "general-purpose processor architecture is very mature"? Because there is no relatively new design structure, and the various functional modules under the current architecture have been studied thoroughly. But is that really the case? What architecture will emerge after the development from CISC to RISC? What you do not know does not therefore not exist; keep a sense of awe.

The rapid development of the GPU has left the general-purpose processor far behind in parallel computing. What can the general-purpose processor do? At present the CPU and GPU are simply glued together, and there is no better way to fuse them. How do you naturally integrate GPU-style parallel processing of big data into a general-purpose processor?


PS: if the above contains wrong characters, improper wording, wrong content, faulty logic, etc., please correct me.


    -------------------------------------------------------------------------------------------------------
With all that said, it is clear that I have my own solution for what is missing from today's general-purpose processors, but it is not convenient to post it here. The general-purpose processor architecture I designed is named ZISC (Zhu's Instruction Set Computer, 祝氏指令集计算机). It is not up to me to decide whether my solution makes sense, or whether it really belongs to the next generation of general-purpose processor architectures.


----------------------------------- divider ---------------------------------------------------------

In 2007, I implemented a very simple processor based on the ZISC architecture on an FPGA development board, as a validation model. It worked when running assembly programs on it. (I wrote a few simple assembly programs; at the time I bought a book and, following it, wrote a lexical analyzer that assembled them into binary code.) That processor model had 8 cores.


    Email: zhu1982lin@gmail.com


  2. #2
    Expert
    Join Date
    Apr 2013
    Location
    East SF Bay Area
    Posts
    1,470
    Thumbs Up
    Received: 427
    Given: 389
    Do you have any opinion on the Automata architecture?


  3. #3
    Top Influencer
    Join Date
    Jul 2014
    Posts
    100
    Thumbs Up
    Received: 44
    Given: 19
RISC did have a guiding principle, which was the assumption that the whole CPU ran on a single clock. So the basis for the original 801 RISC at IBM was that if the average speedup from adding an instruction, over a set of benchmarks, was less than the slowdown in the clock due to the extra logic needed to implement the instruction, then it was not worth having. Of course this idea is difficult to measure exactly, but the guiding principle is easy to grasp.
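That cost/benefit test can be sketched in a few lines; the numbers below are made up purely for illustration:

```python
# Illustrative sketch of the IBM 801-style rule: a candidate instruction is
# worth adding only if its average speedup across benchmarks exceeds the
# clock slowdown its extra logic imposes on every instruction.

def worth_adding(speedups, clock_slowdown):
    """speedups: per-benchmark fractional speedups from the new instruction.
    clock_slowdown: fractional cycle-time increase its logic adds."""
    avg_speedup = sum(speedups) / len(speedups)
    return avg_speedup > clock_slowdown

# A fancy instruction that helps one benchmark a lot but stretches the clock:
print(worth_adding([0.08, 0.00, 0.01], 0.05))  # False: net loss
# The same clock cost, but broad benefit across the benchmark set:
print(worth_adding([0.08, 0.06, 0.07], 0.05))  # True: net win
```

The point of the rule is that the slowdown is paid by every instruction on every cycle, while the speedup is only earned when the new instruction is actually used.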

When it comes to modern chips of more than 800 mm² with multiple kinds of accelerator in them, the principle is quite different. The various parts of the chip may clock independently, with a mesh to interconnect them. Accelerators are only turned on part of the time. The locality of clocks addresses the original IBM 801 concern. And with accelerators you can suggest a different rule: did you save power? The chips are power limited. Some accelerators can run at 100x lower power for the same result as a general-purpose CPU, mostly due to organizing data flow to be local, since data fetches consume much more energy than computation. So, if you can offload a task to an efficient accelerator and save 5% of your power, and that took less than 5% of your chip area, you have a candidate for inclusion. If the activity now has lower latency, can be used in more scenarios, or has greater throughput, then these are bonuses which may seal the case for including the function. And if that function is not in use 90% of the time, who cares? Your chip is bound to have dark silicon, since running all functions all the time would melt it. So long as the function does not waste power when idle, dark is the new opportunity.
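The arithmetic behind the offload argument is simple; the figures below are hypothetical, just to show how a modest workload share turns into a chip-level saving:

```python
# Hypothetical numbers illustrating the power argument above: an accelerator
# running a task at 100x lower power than the general-purpose cores recovers
# almost the whole share of chip power that task used to consume.

def chip_power_saving(task_share_of_power, accel_efficiency):
    """task_share_of_power: fraction of total chip power the task consumes on
    the CPU cores. accel_efficiency: accelerator power as a fraction of CPU
    power for the same work (e.g. 0.01 for a 100x advantage)."""
    return task_share_of_power * (1.0 - accel_efficiency)

# A task burning 5% of chip power, offloaded to a 100x-more-efficient unit:
saving = chip_power_saving(0.05, 0.01)
print(saving)  # ~0.0495: nearly the full 5% is recovered
```

On the rule described above, if the accelerator delivering that ~5% saving costs less than 5% of the die, it is a candidate for inclusion, before even counting latency or throughput bonuses.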

    Compare it to memory, where 99% of your memory is not used at any one time, maybe even over the course of a typical second. Overall, your memory is like accelerators. Very useful for specific results, idle mostly. Get used to it.


  4. #4
    Member
    Join Date
    Aug 2018
    Posts
    4
    Thumbs Up
    Received: 1
    Given: 0
In my opinion, all current processors are RISC (x86 internally converts its instructions to RISC-like micro-operations). The biggest problem facing current chips is that there is no guiding principle. At 7nm, and even smaller processes, designers are led back onto the old path of CISC, converting the excess wafer area into faster execution speed.


What is needed is an efficient system that coordinates the various components within the chip, like a factory assembly line, never idle.


The heat dissipation problem caused by running all components of the CPU at full speed is a separate subject, not the one discussed here. Memory is a storage unit and the CPU chip operates in a different mode, so the idle pattern of memory cannot be compared with that of the CPU.


(I am using Google Translate; if something is unclear, please point it out.)


  5. #5
    Top Influencer
    Join Date
    Jul 2014
    Posts
    100
    Thumbs Up
    Received: 44
    Given: 19
    The components are not just on the CPU chip, there are accelerators all over the place. The instruction set and the cores have very little to do with coordinating the work, it is much more how the fabric works to connect units, what sort of bandwidth it has, how it connects to IO pins, how do doorbells and interrupts work, is there acceleration for queuing like RDMA or NVMe, is there coherence on the IO, how many sockets can be connected through the mesh extender, are you set up to scale to multiple die in a package, etc. A new core and instruction set may be interesting for a specialized accelerator, but completely miss the point for a general CPU now.


  6. #6
    Member
    Join Date
    Aug 2018
    Posts
    4
    Thumbs Up
    Received: 1
    Given: 0
Currently, because the architecture is too old, it is not suited to the development of the wafer fabrication process. That is what leads to all these accelerators. Accelerators can increase speed, but not by much; the return on investment is too low.
My architecture is better than the current general-purpose processor architectures. It can exploit the current process (7nm), and it can achieve much faster computing speed than current chips.


  7. #7
    Top Influencer
    Join Date
    Jul 2014
    Posts
    100
    Thumbs Up
    Received: 44
    Given: 19
    Accelerators do not increase CPU speed. The ones I am talking about are completely separate from the CPU cores, and leverage the same high bandwidth fabric the cores are attached to. The core is only a small part of the modern computer. Its speed is more a consequence of memory bandwidth, latency, and caches than it is about the instruction set, ALU path, or registers. Those "classic" computation things occupy a tiny part of the modern die, and the reason all that other stuff is on the die is because it is important.


  8. #8
    Member
    Join Date
    Aug 2018
    Posts
    4
    Thumbs Up
    Received: 1
    Given: 0
I am very sorry; I did not understand your question, so my answer missed the point.
My architecture covers only the core part of the CPU. IO pins, interrupts, caches, etc. are not included (not absent, but designed on the same principles as before).
I have no experience designing IO pins, interrupts, caches, etc. If that is what you are asking about, I am not able to answer you.

(Using Google Translate)


  9. #9
    Top Influencer
    Join Date
    Jul 2014
    Posts
    100
    Thumbs Up
    Received: 44
    Given: 19
    You may have no experience of how to design a cache or the communication fabric or those other things, that is ok, but to be an architect and suggest changes you do need to understand how those things are used. Architecture is about balance. Anyone with decent training and tools can define a core that runs fast. But it does not run usefully unless it connects to other parts of the system. If you look at a modern SOC and drill down to see what all the elements are you will typically find the CPU cores are just a few percent. As for power, the energy needed to do a floating point multiply is easily 100x smaller than the energy to move the values from and to memory. So, low power and fast are no longer about the instruction set or the core. Those were important 30 years ago. To have an architecture today you need to focus on how to move data efficiently between stations, how to make accelerators which have pipelines designed so data flows where it is needed on short wires, how to balance a diversity of function units. 30 years ago the CPU core was a "one man band". Now, there are many cores and they are just part of an orchestra of specialization. They might not even be the conductor.
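The claim about data movement dominating compute can be made concrete with rough, order-of-magnitude energy figures (the numbers below are assumptions for illustration only; actual values vary by process node and memory system):

```python
# Assumed order-of-magnitude energies: a 64-bit floating point multiply vs
# moving 64-bit words to and from DRAM. Both figures are illustrative only.

FMUL_PJ = 10.0         # ~pJ for a 64-bit FP multiply (assumed)
DRAM_WORD_PJ = 1500.0  # ~pJ to move one 64-bit word to/from DRAM (assumed)

# One multiply needs two operands fetched and one result written back:
movement = 3 * DRAM_WORD_PJ
print(movement / FMUL_PJ)  # 450.0: movement dominates by over two orders
```

With numbers in this range, keeping operands on short local wires inside an accelerator pipeline, rather than round-tripping them through memory, is where the real power savings come from.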

