
Time to develop in #MSDyn365FO… And now what?

Well, I am a programmer with experience in Microsoft Dynamics AX. I have participated in several projects on versions 2009, 2012, R2 and R3… and finally, my company starts working on a Dynamics 365 for Finance and Operations project, and I think… and now what?

First of all, let me clarify that my intention with this article is not to speak in conclusive terms, to impose my point of view or to reject that of others. On the contrary, with this article I would like to stimulate the curiosity of other colleagues, generate debate and question the mechanisms and strategies we use daily, with a single purpose: to improve as professionals. That is why I invite you to doubt and comment on everything you read from now on, so we can generate a debate that enriches us both.

I’m very good at programming with X++. Has that changed?

As you know, #MSDyn365FO is a web application (the AOS is no longer a Windows service, but a website that runs on IIS), but that does not mean that we need web programming knowledge to continue working on this product. The general base of the system has not changed: we continue programming mainly in X++, and the basic functionality of the system is the same.
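
In fact, everyday X++ code still looks exactly the way it always has. A trivial sketch (the account number is just a placeholder):

    // Classic X++: a simple select against CustTable, same as in AX 2009/2012.
    CustTable custTable;

    select firstonly custTable
        where custTable.AccountNum == '1101'; // placeholder account number

    if (custTable)
    {
        info(strFmt("Customer %1: %2", custTable.AccountNum, custTable.name()));
    }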

Does that mean I can continue working as I have done so far? Not at all. The main customization paradigm of the ERP has taken a 180-degree turn. Until now, we were used to putting our code at any point in the system (overlayering), thus modifying its standard behavior. That is over. Now we have to work with an extension model in which not everything is doable. We must think carefully about the modifications we make and assess their viability, always bearing in mind that the principle behind all this is to extend the functionality of the system, not to change it. Thanks to this, the core of the application stays “intact”, and we can forget forever the dreaded migrations and let ourselves be carried away by the fantastic One Version: periodic updates that allow us to always work with the latest version of the product.
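
To make the paradigm shift concrete, here is a minimal Chain of Command sketch. MyStandardClass and doSomething() are placeholders, not a real API; the point is the shape of an extension:

    // Chain of Command: we wrap the standard method instead of editing it.
    [ExtensionOf(classStr(MyStandardClass))]   // placeholder class, for illustration
    final class MyStandardClass_AXZ_Extension
    {
        public void doSomething()
        {
            // Our logic before the standard code...
            next doSomething(); // mandatory call to the wrapped standard method
            // ...and our logic after it. The standard code itself is never touched.
        }
    }

The call to next is mandatory, and that is precisely what guarantees that we extend the standard behavior instead of replacing it.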

Not everything in this life is X++

Related to the previous point: “comfort” is over. It’s no longer worth hiding under that protective layer that was X++; with it, everything was possible, and we didn’t need anything else to be happy. Now it’s necessary (if not mandatory) to be curious. The IDE has moved to Visual Studio, which allows a much richer integration with Azure DevOps. We have LCS to manage environments and deployments. We have to be able to see beyond X++: to know Azure, to know the Power Platform (Power Apps, Power Automate, Power BI), to be interested in tools such as RSAT and ATL for automated testing. In short, we must open our minds beyond the ERP to find richer and more scalable solutions. It’s clear that at the beginning it can be a bit hard, but in the end it makes your work much more interesting and fun.
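
RSAT and ATL deserve articles of their own, but as a small taste of what automated testing looks like at the X++ level, here is a minimal unit test sketch using the SysTest framework (AXZDiscountCalculator and calcDiscount() are made up for the example, and attribute details can vary between versions):

    // Minimal SysTest-style unit test; the class under test is hypothetical.
    class AXZDiscountCalculatorTest extends SysTestCase
    {
        [SysTestMethod] // test discovery details may vary between versions
        public void calcDiscount_returnsTenPercent()
        {
            AXZDiscountCalculator calculator = new AXZDiscountCalculator();

            // 10% of 200.0 should be 20.0.
            this.assertEquals(20.0, calculator.calcDiscount(200.0, 10.0));
        }
    }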

Package and model management

One of the first questions I asked myself when I started working with #MSDyn365FO was: how many packages and models do I have to create? How do I organize the code we develop? The answer I received from Microsoft engineers was the following: keep it as simple as possible.

And that is what we have tried to do from day one, which does not mean that we have always done it well. On the contrary, we have been learning from experience, and today we have reached a system in which everyone on the team feels comfortable. We usually create a single package containing all the objects of our customizations, divided into as many models as “modified” modules. That is, if we have to carry out a specific development for the Projects module, we create our AXZProjects model within our Axazure package. If we need to add certain fields to the CustTable, that modification goes directly into AXZCustomers, and we use the Axazure model to store all those utils or helpers that we can reuse across developments and modules (dimension management, external DLLs…). In this way, all “our” code is easily located and logically ordered, without adding much complexity when making references between packages.
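
As an example of the kind of helper that lives in that shared model, here is a sketch of a small dimension utility. The class is hypothetical, but the pattern (reading the display value of an attribute from a default dimension) is a common one:

    // Hypothetical helper in the shared Axazure model: returns the display value
    // of a given attribute (e.g. "Department") from a default dimension.
    class AxazureDimensionHelper
    {
        public static DimensionDisplayValue getDefaultDimensionValue(
            DimensionDefault _defaultDimension,
            Name             _attributeName)
        {
            DimensionAttribute dimAttribute = DimensionAttribute::findByName(_attributeName);

            if (!dimAttribute)
            {
                return '';
            }

            DimensionAttributeValueSetStorage storage =
                DimensionAttributeValueSetStorage::find(_defaultDimension);

            return storage.getDisplayValueByDimensionAttribute(dimAttribute.RecId);
        }
    }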

This organization works for us when we are on an implementation project. For ISV solutions or internal products, on the other hand, we use an individual package and model for each one.

I’m not saying that this is the right way to work, but it is the way in which we, at Axazure, feel comfortable today. I am also pretty sure that we will change our strategy, just as it has been changing over the last few years, because the most important thing, as I have already mentioned, is to ask yourself whether you are doing well and to figure out whether you can improve.

Lately I have seen different ways of organizing: a single model; different models per module; even a package and model for each GAP/development. Imagine the number of references, label files, extensions… that would need to be created. Again, I’m not saying that it is a wrong way of working, but personally it seems to me that it brings more complexity than value. In the end, when I decide on a specific strategy, I always weigh the same balance: the simplicity or complexity of my solution versus the value it brings me. If the value is greater than the complexity, we go ahead with it, but the premise is always the same: as simple as possible.

Code Repository and Branch Management

In this section, as in the previous one, my recommendation is that you define a branching strategy that is as simple as possible, and only add complexity when the value it provides exceeds the cost of managing it.

Currently, and after making several changes during these last 3 years, we work with the following scheme:

  • One DEV branch per developer
  • MAIN branch
  • RELEASE branch

This strategy allows us to always keep our code “safe”. We can create as many changesets in our DEV branch as necessary, which gives us, on the one hand, peace of mind, since all our code is in the repository; and since we can check in as often as we need, it also allows us to change, test, delete and rewrite our code knowing that we can roll back at any time. (It is a way to simulate the local repositories of Git, since TFVC does not have this possibility.)

On the other hand, once our development is finished, we can make a single merge to the MAIN branch with all our modifications, keeping that branch’s list of changesets quite clean and controlled.

As I said, so far this solution gives us enough security, keeping the complexity of branch management to the minimum possible.

I doubt that this is the best strategy; in fact, we are quite new to this world, so there is probably a better one. But we are comfortable with it, and it gives us enough value without hindering our day-to-day work.

Build environment, AZDO Pipelines, and Continuous Integration and Deployment

In this particular section, I am going to allow myself to be a little more critical and blunt, based on the experience accumulated in projects and, above all, in the audits we have performed.

First of all, let’s talk about creating deployable packages in order to deploy them to other environments (UAT, PreProd, Production…).

We all know that from our development environment we can open Visual Studio, go to Dynamics 365 > Deploy > Create Deployment Package…, select the packages we want to deploy, generate our deployable package and upload it to LCS to apply it, right?

Well, please, DO NOT DO IT ANYMORE! It really is not the right way to do it. It is not an opinion, it is a fact. Working this way, you take the (unnecessary) risk of deploying code that you did not want to deploy, or of deploying old versions of an object, because… who has not forgotten to do a get latest from time to time? We can generate a package that does not contain those last-minute modifications made by a colleague.

You might wonder… so how do we generate the DP? Well, that’s the purpose of the Build environment (for now, but we’ll talk about that another day). The flow that we must follow, from the moment we write our code until it is uploaded to LCS to be deployed, is described below.

That is, all development machines are continuously uploading and downloading code to and from the AZDO (Azure DevOps) repository, and ultimately the Build environment is responsible for getting all the code from the repository and generating the DP to be uploaded to LCS. In this way, we always deploy all the code, and only the code, that is in the repository. The latest changes from our colleague will be there, and our test code, or that job we wrote to fix something we did wrong, won’t be deployed.

We have heard the argument: “We did not take into account the cost of maintaining a build environment; we cannot suddenly afford this increase.” That is why it is important to make it clear to the client, from the beginning, that this environment is necessary, and to explain why. That way they will see that, again, what it brings us is much higher than its cost.

So, it’s now clear what we have to do when we want to create our DP: go to the Build environment > Get Latest > Build and Sync > Create Deployable Package > Upload to LCS. Easy peasy, right?

Well, no, no, no, no, and a thousand times no! The Build environment is (as a rule) an environment that we don’t need to access: we don’t need to map local folders to the code repository, and we don’t have to generate these packages manually. Why? The answer is very simple: because we can automate all of this using Azure DevOps pipelines.

When we deploy the Build-type environment from LCS, a new pipeline is automatically generated within Azure DevOps. With this pipeline, a little interest, and 30 minutes to an hour for each new project, we can get the whole build and deployment process 100% automated. Well, 90%, since the deployment to production must still be scheduled manually.

Personally, I think the use of Azure DevOps Pipelines in Dynamics 365 for Finance and Operations is one of the things that has most improved our lives as Dynamics AX/365FO developers and administrators. The value it brings us is infinite, as is the time we save in each of our deployments. As I said, with a little desire, learning and imagination, you can define a fairly complete Continuous Integration and Deployment strategy. It will let you focus on doing quality work by automating “repetitive” processes, such as deploying developments to different environments, and of course it makes these processes safer.

And how do we do all that? Well, that is not the purpose of this article, but if you are still reading, have not fallen asleep along the way and are interested in improving your life and the lives of those around you, I advise you to visit Adrià’s blog, specifically the page where he has grouped all the articles he has written about #MSDyn365FO and Azure DevOps. Enjoy it!

Conclusions

In summary, I will leave in writing these points that I consider essential:

  • Don’t be afraid of change
  • Check out what you have around (Azure, Power Platform, DevOps…)
  • Pay attention to the experts (of course, I’m talking about Microsoft, not me :))
  • Plan the project correctly from the beginning (necessary environments)
  • Enjoy learning and improving

Finally, I would love to read your opinion about the points we have discussed in this article and to learn from your experience. I’m sure you have better strategies for defining models, branch management …, or some advice to give me about Dynamics or DevOps in general. Can you tell me?
