
Will there be a 64-bit version of Mastercam?


Mackius

Guest CNC Apps Guy 1

Though I won't speak for CNC, I'll comment that 64-bit systems are, and will be for some time, relatively obscure. I would venture to say that until 64-bit becomes mainstream (a double-digit percentage of desktops), it would not make sense to put much development into something like that. I would think that multiple-CPU development would make more sense at this juncture. But I would be interested in hearing CNC's take on this.

 

Perhaps they could run a couple of polls, like: if you had to pick one platform, multi-processor or 64-bit, which would you take??? That would be fun.

 

Personally I'd take the Multi-CPU.

 

JM2C


+10000000 James, I would love to see that. When I hear that dual processors can do things at ten times the speed of a regular processor, then yeah, I think I would lean toward the dual.

 

Crazy Millman

 

I think if you were talking about moving a CNC machine control from 32-bit to 64-bit, then that is a whole different ball game. I've been on both sides of that fence, and the 32-bit controls only lasted about one year before they all went to 64-bit.

Guest CNC Apps Guy 1

quote:

...don't have many opportunities to take advantage of multiple CPUs...

- Toolpath Crunching

- Solids Generation

- Toolpath Re-Generation

- Solids Re-Generation

- X-Form Translate

- Surface Creation

- Verification

- etc...

 

James teh thought long and hard about Multi-CPU's

Guest CNC Apps Guy 1

I strongly believe CNC should consider a multiple-CPU implementation, as long as the software doesn't take a performance hit if you don't run multiple CPUs. I believe they would be the first mid-range CAD/CAM system to do so.

 

Are your ears perking up yet, Pete???


My experience with 3D Studio MAX (which is multi-CPU capable) doesn't show much of a benefit on two-CPU systems except in certain well-defined tasks that can be split up into sub-tasks. Interactive performance in things like modeling and editing is about the same. Application-based benchmarks show that this sort of thing is common with multi-CPU configurations and interactive applications. Such things scale poorly, if at all.

 

The benefit in a workstation comes from being able to run more than one application at a time at full speed (provided the application in question, or its data set, fits into the L2 or L3 cache). Even then, contention for resources like disk and RAM (neither of which is capable of keeping up with even a single CPU) conspires to slow performance. SQL servers and mail servers, with their processing of thousands of independent tasks each involving a small amount of data, have little trouble scaling. Interactive applications almost never see a boost that exceeds the human interface boundary condition (i.e. the increase is too small to notice).
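
For anyone who wants to put numbers on that, the standard back-of-the-envelope tool is Amdahl's law: if p is the fraction of the work that can run in parallel and N is the number of CPUs, the best speedup you can see is

Speedup(N) = 1 / ((1 - p) + p/N)

Even with half the work parallelizable (p = 0.5), two CPUs only buy you 1 / (0.5 + 0.25) = 1.33x, nowhere near double.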

 

Modeling is a task that can't easily be broken down into a number of smaller sub-tasks that can be processed without having any dependencies on the other sub-tasks. i.e. if I apply a fillet to an edge, each part of the task (figure out what faces are affected, calculate the shape of the fillet surface, trim the faces, stitch the fillet surface(s) onto the trimmed faces, check the model) depends on the successful completion of the previous part, and needs information from that completion. Likewise with translate x-form, verify, surface generation, and solids re-generation.
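
To make that dependency chain concrete, here is a minimal C++ sketch; the types and function names are invented for illustration, not Mastercam's kernel API:

code:

#include <iostream>
#include <string>

// Hypothetical stand-ins for the kernel's data types; not Mastercam's API.
struct FaceSet { std::string desc; };
struct Surface { std::string desc; };

FaceSet findAffectedFaces()             { return {"faces at the edge"}; }
Surface computeFillet(const FaceSet& f) { return {"fillet over " + f.desc}; }
FaceSet trimFaces(const FaceSet& f, const Surface& s) {
    return {f.desc + ", trimmed to " + s.desc};
}

int main() {
    // Each call consumes the previous call's output, so the chain is
    // inherently serial -- a second CPU has nothing to start on.
    FaceSet affected = findAffectedFaces();
    Surface fillet   = computeFillet(affected);
    FaceSet trimmed  = trimFaces(affected, fillet);
    std::cout << trimmed.desc << "\n";
}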

 

New ways of working might be possible, though. Background re-generation of toolpaths would be one, or re-generating more than one toolpath at a time. If you have more than one solid, it should be possible to re-generate each solid independently, too. Contention for RAM will limit the boost in performance, of course, because the data sets are way too large to fit into the cache. But I'd be surprised if there were more benefits than that to be found in multi-processing. They haven't shown up in any other multi-threaded applications to date, vendor claims notwithstanding.
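
A minimal modern C++ sketch of the one-thread-per-solid idea (regenerateSolid is a hypothetical stand-in for the real regeneration routine):

code:

#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-in for a per-solid regeneration routine.
void regenerateSolid(int id) {
    std::printf("regenerating solid %d\n", id);
}

int main() {
    std::vector<std::thread> workers;
    // One worker per solid; the solids share no data, so they can be
    // regenerated concurrently (RAM bandwidth permitting).
    for (int id = 0; id < 4; ++id)
        workers.emplace_back(regenerateSolid, id);
    for (auto& t : workers)
        t.join();
}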


OK Rick, this is probably a stupid question, but nothing ventured, nothing gained. If you did have a dual-processor system, could you run more than one session of Mastercam, like one on one processor and one on the other, thus keeping each session's processing on its own CPU? I can have crunch times of 20 to 30 minutes on the scallop toolpath, and that's with a 3.06 GHz P4, a gig of RAM, and a 256 MB video card, and it still takes that long on some parts. Maybe I'm way off here; I can't say I know that much about the new Xeon processors and multi-processor machines, but everyone I have talked to who does raves about the speed.

 

Crazy Millman

 



Yes, you could run two copies of Mill at once, and each of them will perform at *close* to the same speed as if it were running on a single-CPU system. I say *close* to the same speed because there is still a sizeable hit you're gonna take when either session goes to RAM for more data.

 

Xeons mitigate this somewhat in two ways:

 

- The L2 cache is much larger

- The hyper-threaded 'virtual' CPUs share the same L2 cache, allowing the applications to better exploit what locality they possess (i.e. the code/data they need is all in the same area of RAM).

 

I seem to remember that some Xeons also have an L3 cache, which helps a lot when dealing with the RAM bottleneck. Remember that Xeons *also* benefit from the fact that, as high-end workstation parts, they get chipsets, RAM architectures, and video sub-systems that are all about performance, not about saving a couple of bucks. System performance is about a lot more than just how many MHz the CPU has, much like automobile performance is about more than just how big the engine is.

 

If you were to do this, and you had more than one Xeon CPU in your box, you might want to use Task Manager to tie one session to one of the physical CPUs and one to the other (i.e. session 1 on CPU 0, session 2 on CPU 2) so each has the cache to itself. Without that, the Mastercam threads will be scheduled onto whatever CPU isn't busy at the moment. Since Mastercam was maxing out the last CPU it used, that's almost always gonna be a different CPU, which will result in a cache miss and the loss of many CPU cycles.
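
The same pinning can be done in code with the Win32 API instead of Task Manager. A minimal sketch, assuming a two-way hyper-threaded box where CPU 0 and CPU 2 are the two physical packages:

code:

#include <windows.h>
#include <cstdio>

int main() {
    // Pin this process to CPU 0 only (bit 0 of the mask); a second
    // session would use mask 0x4 to land on CPU 2, the other physical
    // package. Same effect as Task Manager's "Set Affinity".
    if (SetProcessAffinityMask(GetCurrentProcess(), 0x1))
        std::printf("pinned to CPU 0\n");
    else
        std::printf("failed, error %lu\n", GetLastError());
    return 0;
}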


Correct me if I'm wrong, but isn't the reason for using a 64-bit CPU over a 32-bit CPU the increased address space it provides? It does create a more difficult programming environment for developers; then again, so did 32-bit computing when it first came out. The big push for 32-bit programs way back was the increased memory addressing, which translates into faster number-crunching routines. In essence, isn't that what toolpaths are all about? So I think a 64-bit version of Mastercam would be very interesting to see. I think the potential is far greater than a multiprocessor version *and* more stable.


The decision to develop a 64-bit version wouldn't be a snap one, that's for sure. There are a lot of factors they'd have to consider. Should they ship two versions, 32-bit and 64-bit, or should they use polymorphic data types (e.g. INT_PTR) for a single codebase that keeps both the 32-bit and 64-bit compilers/linkers happy? Would malloc() calls have to change? The offsets into arrays of certain data types will unexpectedly change. If they use base-16 constants in the code, who wants to be the poor soul who widens all of those values to 64 bits? If they are using other people's libraries (of course they are), do those libraries come in a 64-bit variety? The list goes on. Are we having fun yet? Go Java j/k
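
As a taste of the kind of bug that migration shakes out, here's a minimal sketch of the classic pointer-truncation trap that INT_PTR exists to avoid (the variable names are made up for illustration):

code:

#include <cstdio>
#include <basetsd.h>   // INT_PTR lives here in the Platform SDK

int main() {
    int value = 42;
    // Classic 32-bit habit: stashing a pointer in a long. On 32-bit
    // Windows both are 4 bytes, so it "works"; under LLP64 a long is
    // still 4 bytes but a pointer is 8, so the cast silently truncates:
    // long bad = (long)&value;       // loses the high 32 bits on Win64
    INT_PTR good = (INT_PTR)&value;   // pointer-sized integer, safe both ways
    std::printf("long: %zu bytes, void*: %zu, INT_PTR: %zu\n",
                sizeof(long), sizeof(void*), sizeof(good));
    return 0;
}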

 

In short, the benefits of X being a native Windows app should boost performance enough over previous versions to satisfy most people. And since they're hard at work on X, a P.I.T.A. project like a 64-bit migration is not something I'd want to see them distracted with. Maybe in versions beyond X, but not now.


Let me address 'em one-by-one:

 

quote:

Correct me if I'm wrong, but isn't the reason for using a 64-bit CPU over a 32-bit CPU the increased address space it provides?

That is exactly it. Going from 32-bit to 64-bit raises the addressable memory space from 2^32 bytes (4 GB) to 2^64 bytes (16 exabytes), quite a bit more room.

 

quote:

It does create a more difficult programming environment for developers. Then again, so did 32-bit computing when it first came out.

Actually, for a clean-sheet-of-paper design, the programming task is simplified. With 16-bit programs using 20-bit addressing, programmers had to keep track of segment:offset register pairs to find things, and the nature of segment:offset addressing means that problems with overwriting your own code and/or data can be really tough to track down, as there are literally thousands of apparently different addresses that all point to the same location. Moving to 64-bit software allows developers to address larger amounts of RAM without resorting to segment:offset addressing.
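
To see that aliasing in action: real-mode addressing computes linear = segment * 16 + offset, so a few lines of C++ are enough to show two different segment:offset pairs naming the same byte:

code:

#include <cstdio>

int main() {
    // Real-mode 8086: linear address = segment * 16 + offset, so many
    // apparently different segment:offset pairs alias the same byte.
    unsigned a = 0x1234 * 16 + 0x0005;    // 1234:0005
    unsigned b = 0x1230 * 16 + 0x0045;    // 1230:0045
    std::printf("0x%05X 0x%05X\n", a, b); // both print 0x12345
    return 0;
}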

 

quote:

The big push for 32-bit programs way back was the increased memory addressing, which translates into faster number-crunching routines.

Not necessarily. More bits means that every data type is larger, the instructions are larger, and that makes the programs larger. Larger programs take longer to load, consume more RAM once they are loaded, and take up more space on the disk. The performance benefit that accompanied the move from 16-bit to 32-bit came from:

 

- More efficient CPUs that ran at higher clock speeds.

- Fewer cycles spent calculating memory addresses.

 

The push toward 32-bit computing was driven by the same forces that pushed the industry into making wild-eyed claims about object-oriented code: marketing. If I want you to toss out all your 'old' computers and software to buy my new hotness, I need to convince you that my new hotness is way better than that old busted joint you've got. Putting a bunch of spin on 'bitness' helps make that case. When it was all a software vendor could do to get the same set of features implemented in their 32-bit version that they had in the 16-bit version, that marketing was the only thing that kept 'em from going under.

 

quote:

I think the potential is far greater than a multiprocessor version *and* more stable.

The 'bitness' of the software has little (if anything) to do with its stability. 8- and 16-bit CPUs running rock-solid stable 8- and 16-bit software are found in many embedded applications (i.e. engine management, ABS, microwave ovens, HVAC control systems) where code size, RAM requirements, and stability are vital. How many bits the code uses is a very poor indicator of the code's quality or performance.


+1 Rick - excellent reply!

 

Thanks for clearing some of those items up.

 

quote:

Not necessarily. More bits means that every data type is larger, the instructions are larger, and that makes the programs larger. Larger programs take longer to load, consume more RAM once they are loaded, and take up more space on the disk. The performance benefit that accompanied the move from 16-bit to 32-bit came from:

- More efficient CPUs that ran at higher clock speeds.

- Fewer cycles spent calculating memory addresses.


Given that, would a 64-bit system at a higher clock rate be more efficient, even running 32-bit apps?

Would it be possible to scale a 32-bit app to take advantage of the extra memory width and double up on instructions per clock cycle? I'm not a programmer by any means; I dabble here and there, so I think I understand some of this.

 

What do you think?


quote:

Given that, would a 64-bit system at a higher clock rate be more efficient, even running 32-bit apps?

Would it be possible to scale a 32-bit app to take advantage of the extra memory width and double up on instructions per clock cycle? I'm not a programmer by any means; I dabble here and there, so I think I understand some of this.


ints (integers) and longs (long integers) are 32-bit values on both 32-bit and 64-bit versions of Windows.


quote:

Given that, would a 64-bit system at a higher clock rate be more efficient, even running 32-bit apps?

Efficiency and clock rate are not tied together in any meaningful way, despite the hype you hear from the 'clock rate is king' folks at Intel. One of the odder things about many of Intel's CPUs is that the new generation, while almost always running at a higher clock speed than the older version, is not necessarily any faster when running at the *same* clock speed. i.e. the 200MHz PPro, when running 32-bit code, would outperform a PII running at 200MHz. That is (part of) why the PII was never available at any speed you could overclock a PPro to. The PIII at 1.0GHz offered the same performance as the first generation of P4 CPUs, and the follow-on PIIIs at 1.13GHz and above (I believe it topped out at 1.3GHz, eventually) outperformed them.

 

Then there is the wonderful world of RISC, where 'bit' players like DEC had screaming-fast microprocessors (i.e. the DEC Alpha) years ago that are only recently being equaled by Intel parts running at as much as three times the clock 'speed'.

 

Intel's Pentium M is another good example of performance not being equal to clock speed, as the 1.7GHz parts are supposedly as fast as the current crop of P4s running twice as 'fast'. AMD's CPUs are more of the same, as they clock slower than the Intel CPUs they compete against.

 

quote:

Would it be possible to scale a 32-bit app to take advantage of the extra memory width and double up on instructions per clock cycle? I'm not a programmer by any means; I dabble here and there, so I think I understand some of this.

No. An instruction is an instruction. Only on RISC or VLIW systems does the length of the instructions have anything to do with the 'bitness' of the CPU. The CISC instructions that make up x86 code have various bit lengths, and must be decoded and pre-processed by the CPU's microcode. The need for a separate and complex decoder is one of the reasons RISC CPUs (like the above-mentioned Alpha, or IBM/Apple's PowerPC) can achieve higher instruction rates than CISC CPUs at lower clock speeds.


quote:

ints (integers) and longs (long integers) are 32-bit values on both 32-bit and 64-bit versions of Windows.

That is API/language specific. Microsoft will, of course, need to define such values in a consistent way to help programmers port to 64-bit Windows, lest those programmers find themselves slowly driven mad. What bits the compiler puts on the disk, and how those bits are aligned once they get loaded into RAM, depend on the hardware, the compiler, and the compiler's selected optimization scheme.


Actually, it's heavily OS-specific, like I mentioned. This applies to the C/C++ compilers/linkers for Windows, and since Mastercam is developed with them, it applies here. The 64-bit versions of Windows use the LLP64 data model, and in it all data types are the same size as in the model Windows uses today (ILP32). The only differences between LLP64 and ILP32 are that pointers become 64-bit, and LLP64 adds an alternate 64-bit long data type alongside the 32-bit one. The LP64 model, used by most Unix OSs, makes longs 64-bit, but its integers are 32-bit as well. Of course, there's nothing stopping anyone from typedef-ing their way into the 64-bit arena using INT64 and so on; those have been in the Platform SDK for some time now. But I'd be scared to see the results of typedef-ing like that on an existing project.
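
A quick sizeof check makes the two models visible; built as a 32-bit (ILP32) app only long long prints 8, while built as a 64-bit (LLP64) app the pointer grows to 8 as well:

code:

#include <cstdio>

int main() {
    // ILP32 (32-bit Windows): int=4, long=4, long long=8, void*=4
    // LLP64 (64-bit Windows): int=4, long=4, long long=8, void*=8
    std::printf("int=%zu long=%zu long long=%zu void*=%zu\n",
                sizeof(int), sizeof(long), sizeof(long long), sizeof(void*));
    return 0;
}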

Guest CNC Apps Guy 1

This is a great thread!

 

The question still begs to be asked: which would offer better performance for the buck, multiple CPUs or 64-bit?

 

Based on current pricing, you could probably build a 4-CPU system for the price of a single 64-bit CPU.

 

James teh licking chops and learning

 



quote:

Based on current pricing, you could probably build a 4-CPU system for the price of a single 64-bit CPU.


True enough.

 

quote:

The question still begs to be asked: which would offer better performance for the buck, multiple CPUs or 64-bit?


It doesn't solely rely on hardware. The programming factors that go into both could make a huge difference.

 

In the case of porting Mastercam to 64 bits, there are many different implementation routes they could take. The best one, which would likely involve plenty of cleanup, would take the longest. I haven't developed anything 64-bit (yet), so I can't imagine where to begin.

 

Multiprocessing would also be tricky. Since X will target Windows 2000/XP, that's already a step in the right direction. However, from a programming POV, once again it gets tricky. Which SMP library or libraries to use (OpenMP comes to mind first)? How will calculation tasks be assigned? Do we roll our own SMP algorithms or use established ones? I should ask a prof of mine who does research in multiprocessing and genetic algorithms.
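
For a taste of what the OpenMP route might look like, here is a minimal sketch; regenerateToolpath is a made-up stand-in, the pragma only pays off if the toolpaths really are independent, and you'd build with /openmp on MSVC:

code:

#include <cstdio>
#include <omp.h>

// Hypothetical stand-in; the real regeneration code isn't public.
void regenerateToolpath(int id) {
    std::printf("toolpath %d on thread %d\n", id, omp_get_thread_num());
}

int main() {
    // OpenMP splits the loop iterations across the available CPUs.
    #pragma omp parallel for
    for (int id = 0; id < 8; ++id)
        regenerateToolpath(id);
    return 0;
}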

 

It all comes down to implementation. The fastest hardware architecture can be brought to its knees if the software running on it isn't tight. How about a 64-bit multiprocessor system? That would solve all of our problems

 

My brain hurts now

Guest CNC Apps Guy 1

quote:

...How about a 64-bit multiprocessor system? That would solve all of our problems...

Now yer talkin'!!!

 

Wooooooo hooooooooo!!!!!

 

Hop on it PDG!!! j/k

 

As if he's not popping aspirins like candy as it is already, and then to add this. Sheesh, we're never satisfied, are we???

