Windows Through .Net

This forum is meant for anything you would like to share with other visitors
TerryB1
Posts: 306
Joined: Wed Jan 03, 2018 11:58 am

Windows Through .Net

Post by TerryB1 »

Taking code into the future via C# or a C# derivative should pose few problems.

However, optimum program performance does depend more on the overall coding structure of the program than it would in any non-.Net program.

In the main this is due to the automatic control of memory allocation by the Garbage Collector. Programs, IMO, ideally need to accord with the way the GC works.

Fortunately, in most cases, this ties in with the way we think and have, in all probability, constructed our pre-.Net programs in the first place.

In a few cases, though, conflicts with the GC may occur, and again the adverse effects of most such conflicts will be masked by the sheer speed of modern electronics. (I am fairly sure that it is MS's aim to ensure all such conflicts are masked out.)
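To illustrate what I mean by according with the GC, here is a tiny C# sketch of my own (the names and numbers are purely illustrative, not from the pdf). Objects that are dropped as soon as they are used die cheaply in generation 0, which is what the GC is designed for; keeping every object alive in a long-lived list is the kind of pattern that fights it.

using System;
using System.Collections.Generic;

class GcFriendlySketch
{
    // Long-lived store: anything referenced from here survives collections
    // and is eventually promoted to generation 2, which is costly to collect.
    static readonly List<byte[]> _keepForever = new List<byte[]>();

    static void Main()
    {
        for (int i = 0; i < 100_000; i++)
        {
            // Short-lived: the buffer becomes garbage as soon as the loop moves on,
            // so it dies cheaply in generation 0.
            var temp = new byte[1024];
            Process(temp);

            // Anti-pattern: keeping every buffer alive "just in case" works
            // against the GC.
            // _keepForever.Add(temp);
        }

        Console.WriteLine($"Gen 0 collections: {GC.CollectionCount(0)}");
        Console.WriteLine($"Gen 2 collections: {GC.CollectionCount(2)}");
    }

    static void Process(byte[] buffer) { buffer[0] = 1; }
}

Whether the second pattern actually hurts in a given program is, as I say, usually masked by the speed of the hardware.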

Because of the inter-dependence of code during program operation, it is impossible to give general guidance on questions such as "What factors should I look at in my program to bring it into the .Net world?" or "Where should I start in the .Net world?"

These and similar questions can only be answered on the basis of a common understanding of how .Net works, and that understanding I did not find easy to arrive at just by looking at code.

I am attaching a pdf which I produced in order to clarify things in my own mind. It is my interpretation of documentation available in various forms in the public domain.

I hope it is of use to some here. But I am far from infallible: my interpretations may be wrong (I would be grateful to hear from anyone who thinks that is the case), and others may see things from a different perspective and be able to explain them more comprehensibly.

For those interested I hope it can serve as a common level of understanding.

I did try to make things as compact as possible, but it still runs to 12 pages.

Terry
Attachments
Windows Through Dot Net.pdf
ic2
Posts: 1858
Joined: Sun Feb 28, 2016 11:30 pm
Location: Holland

Windows Through .Net

Post by ic2 »

Hello Terry,

Oh my, this must have cost you half a week! It's a very interesting, I would even say rather amusing, comparison.

I think, however, that few programmers really care about how the underlying machinery, such as the GC, works. If they really must, because something fails that could be caused by, say, the GC, they will dig into it and probably try to find the offending line of code rather than try to understand why it failed at a lower level.

But if someone wants to learn more, your PDF is certainly a special way of gaining some insight.

Dick
OhioJoe
Posts: 131
Joined: Wed Nov 22, 2017 12:51 pm
Location: United States

Windows Through .Net

Post by OhioJoe »

Terry, thanks for the great read. I found it fascinating. It helps confirm that .Net and WPF are the right choices for the future.
It's been a while since I've read anything about contiguous memory. It takes me back to the days of MS-DOS, when we always needed to worry about memory management and the potential size of arrays, which needed contiguous memory.
Here are some interesting questions.
One of our apps is descended from a DOS database program, originally written for computers with 1 megabyte of RAM. Today the app performs essentially the same task, but now in Windows 10 on computers with 8 gigabytes of RAM, 8,000 times the memory of the old DOS machines! Am I missing something here? What has happened in the last 30 years that requires the consumption of so much RAM? Is it the graphics processing, or is it the surrendering of the memory-management job to the garbage collector?
My users tell me they're impressed by the speed of our application. (So I must be doing something right.) As I migrate from VO to X#, I'd like to keep that advantage. Do we still need to worry about memory management when memory is so plentiful and cheap? Is there a performance benefit to programming as if we were still writing code in C?
And if so, what are the best practices?
Not asking for another 12-page treatise but I am interested in reading what you and others have to say.
Joe Curran
Ohio USA
ic2
Posts: 1858
Joined: Sun Feb 28, 2016 11:30 pm
Location: Holland

Windows Through .Net

Post by ic2 »

Hello Joe,
OhioJoe wrote: My users tell me they're impressed by the speed of our application. (So I must be doing something right.) As I migrate from VO to X#, I'd like to keep that advantage. Do we still need to worry about memory management when memory is so plentiful and cheap? Is there a performance benefit to programming as if we were still writing code in C?
And this is an interesting reply, Joe. It seems that speed is not considered important anymore. IL-based .Net programs need some time to JIT-compile before they run, and that always means waiting time. I still have Office 2003 on my system: Word 2003 opens instantly, while Word 2016 takes 1-2 seconds before it shows. Not a big wait, but a noticeable difference. Next, VO is much faster than VS in every respect: restarting, compiling, searching. Fully cloud-based programs running from a browser are slower still; there is a response time for almost everything you do, clicking a control, getting the next screen.

Windows memory is filled with duplicates of all kinds. Check the registry, for example: when you search for a certain key you will sometimes find dozens of duplicates of its content under different (sub)keys. Or check the Windows system directories: same story.

There is no incentive to use less memory (RAM or disk) anymore, so the new i9-based computer with the latest video card and SSD technology that I started using this year is hardly faster than the i7 (with SSD) system I have used for over six (!) years.

In short, there are certainly advantages in a transfer VO->.Net. But you will have a hard time keeping your users impressed with the speed of your application.

Dick
TerryB1
Posts: 306
Joined: Wed Jan 03, 2018 11:58 am

Windows Through .Net

Post by TerryB1 »

Dick/Joe

Thank you both for your comments. Dick, I fully agree with the points you make.


Joe: You ask "What has happened in the last 30 years...?"

Neither the GC nor graphics processing changes anything from the early DOS days. Things are just done differently.

Electronic processing has become faster (we are nowadays talking of 3 GHz switching rates), but that is not the really significant, game-changing factor either.

The most significant factor is the Reliability or Low Failure Rate of modern electronics.

In the old days failure rates were high: one or more failures a day.

Today they are low, with MTBF (Mean Time Between Failures) measured in years or perhaps decades.

This means a program can carry out thousands of processes (mathematical, scientific and so on), storing each result in memory as it goes. This can now be done reliably, whereas years ago the stored results would have been full of errors.

Just as with simple maths, a correct result depends on the order in which these processes are re-constituted. In my pdf this is the order in which the processed results are visited.

The limit to the number of processes that can be carried out is dictated by program purpose and size.

On-screen rendering is then achieved by "visiting" each of these stored results in the correct order and converting each result to an x/y co-ordinate (nowadays a vector).
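As a toy illustration of those two steps, and no more than that (the names are mine, not from the pdf): the results are computed and stored first, then visited in order and converted to screen co-ordinates.

using System;
using System.Collections.Generic;

class RenderSketch
{
    struct Result { public double Value; }      // one processed result
    struct ScreenPoint { public float X, Y; }   // the x/y co-ordinate it maps to

    static void Main()
    {
        // Phase 1: carry out the processes, storing each result as we go.
        var results = new List<Result>();
        for (int i = 0; i < 500; i++)
            results.Add(new Result { Value = Math.Sin(i * 0.1) });

        // Phase 2: "visit" each stored result in the correct order and
        // convert it to an x/y co-ordinate ready for rendering.
        var points = new ScreenPoint[results.Count];
        for (int i = 0; i < results.Count; i++)
            points[i] = new ScreenPoint { X = i, Y = (float)(results[i].Value * 100) };

        Console.WriteLine($"Prepared {points.Length} points for rendering.");
    }
}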

Remember that C# (and Visual Studio) cover every type of program you can think of. 3D CAD programs are a case in point: you can imagine a vast number of calculations forming the basis of the visual presentation of numerous projections onto a 2D screen surface.

Of course, much of that has limited relevance to a business program, though it could become relevant to the underlying statistical analysis within such a program.

Back to St. Albans

Suppose, as in my pdf, you visited St. Albans. You went straight to the Cathedral and ordered a meal. That visit could have been stopped, and you could have been forced to get security clearance first by visiting the police station, providing credentials and so on.


Everything is a trade-off with something else - generally time. As a programmer only you can think it through for your program.

It is visiting in the correct order, so as to get things contiguous in memory, that is important, just as in the DOS days of yore.
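A small sketch of the sort of thing I have in mind (again purely illustrative): an array of value types is a single contiguous block of memory, so visiting it in index order walks memory sequentially, whereas an array of class instances only holds references to objects that may be scattered around the heap.

using System;

class ContiguousSketch
{
    // A value-type element: an array of these is one contiguous block,
    // much like the arrays we had to find contiguous memory for in DOS days.
    struct Sample { public double Value; }

    static void Main()
    {
        var samples = new Sample[1_000_000];   // a single contiguous allocation

        double total = 0;
        for (int i = 0; i < samples.Length; i++)   // visiting in order = sequential memory access
            total += samples[i].Value;

        // By contrast, if Sample were a class, the array would hold references
        // to objects scattered around the heap, and visiting them in order
        // would give no guarantee of contiguous access.
        Console.WriteLine(total);
    }
}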

Let me give you one example of what not to do: you visit St. Albans, have lunch and move on. Everything you did there is now in the past. The Waiter thread is out of scope. So is the Chef thread. Creating and switching threads is hugely time-consuming.

The time allowed for this is limited by the O/S through the allocation of a timing quantum, chosen so that it does not adversely affect any higher-priority threads, usually the UI.

Clearly you want to minimise thread switching. So combine them into one thread, not two. Or better still, do it all on your own thread, and then no thread switching is involved at all.
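In rough C# terms the idea is something like this (only a sketch of my own; the Waiter/Chef names simply carry the metaphor into code):

using System;
using System.Threading.Tasks;

class ThreadSketch
{
    static async Task Main()
    {
        // What not to do: two separate short-lived pieces of background work,
        // each with its own creation and switching overhead.
        // var waiter = Task.Run(() => TakeOrder());
        // var chef   = Task.Run(() => CookMeal());
        // await Task.WhenAll(waiter, chef);

        // Better: combine the work into a single background task...
        await Task.Run(() => { TakeOrder(); CookMeal(); });

        // ...or, if the work is light enough, do it on the current thread
        // and avoid thread switching altogether:
        // TakeOrder(); CookMeal();
    }

    static void TakeOrder() => Console.WriteLine("Order taken");
    static void CookMeal()  => Console.WriteLine("Meal cooked");
}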

Now this is all fine in one circumstance: that you finish your meal before moving on. The Waiter and Chef threads are short-lived and complete before you have finished your meal.

But what happens if you move on before finishing your meal? You now have a dilemma: do you take things with you on a different, longer-lasting thread, or do you wait a bit longer (holding/slowing the UI thread) while putting what's left in the doggy-bag and taking it on to Paris with you?

(Of course you have the option of just leaving what you don't eat on the plate. But in programming terms this is not ideal: it means you have ordered more than you can eat, and the cost rebounds on your program by causing it to process things unnecessarily.)
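In rough C# terms the two options might look like this (again only a sketch of my own, not a prescription):

using System;
using System.Threading.Tasks;

class DoggyBagSketch
{
    static async Task Main()
    {
        var meal = Task.Run(() => FinishMeal());   // the Waiter/Chef work, still running

        // Option 1: wait until the background work completes before moving on,
        // i.e. pack the doggy-bag first.
        await meal;

        // Option 2: move on immediately and let the work carry on as a
        // longer-lived background task, collecting the result later.
        // var doggyBag = meal.ContinueWith(t => Console.WriteLine("Collected later"));

        Console.WriteLine("Moving on to Paris");
    }

    static void FinishMeal() => Console.WriteLine("Meal finished");
}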

These are things only you, as a developer, can decide, in the context of your own program.

IMO, for a business-type program, with current hardware switching speeds, delays on the UI will be unnoticeable.

I apologise for all the mixed metaphors, but I hope it makes some sense.

Dick as you say "In short, there are certainly advantages in a transfer VO->.Net. But you will have a hard time keeping your users impressed with the speed of your application." I have to agree.

But I have to add that equivalent speed is not unattainable, and there are circumstances where UI speed may even appear faster.

Terry