In part 2 (found here), I indicated that the next article would be focused on securing our budding API, but that is going to have to wait. Clearly that promise was made by a much younger and more naive version of the man I am today. As it happens, the scaffolded API we built in the last article, while functional, isn’t something that scales well. Based on what I’ve learned, there may also be some functional and security-oriented concerns. So, instead of talking about securing the API, I’ll instead be focusing on evolving our API to use some common development patterns.
Service Model
As an output from the last entry, you should have a basic controller that presents a number of web API endpoints. Based on what I’ve come to understand, the first pattern we need to adopt is splitting our code into three distinct pieces. We need to create a new service definition, which I’ve also seen referred to as a ‘Repository’, and then use the constructor to define the service capabilities. Once we’re done, only very basic calls to the service will be left in the controller itself. There are several benefits that this approach offers, so I’m given to understand.
One is flexibility when working with other developers. The more isolated the various pieces are, the easier it is to split out work and avoid conflicts. Naturally this increases the complexity, to a degree, so it’s important to follow common patterns and keep things organized in a consistent fashion. This also offers greater stability in the API elements that the users of your solution will interact with. Constant changes in the code of the API controller itself could result in breaking changes, or otherwise negatively impact the users.
Another solid benefit to consider is dependency injection. If we continued to keep all of our business logic within the controller itself, then any time we needed access to the same information we’d either have to call our own APIs, or we’d have to duplicate code. With things split out, we can instead just call our service, which keeps us from having to duplicate our code. All we have to do is to instantiate an instance of the service in a given class, and we gain access to all the same logic for internal purposes.
Yet another benefit of this approach is that of consolidation and security. While that first part may sound counter-intuitive, hear me out. If we elect to store all of our business logic only in the controller, that means that anything we might want to accomplish would always have to go through the API. If we need to set up some special type of actions intended to support an internal function however, we’d also potentially have to expose that action to the users as well. Sure, we can secure the API with authentication and some authorization rules, but no programmer is perfect. One flaw, or zero-day exploit, and your API could be exploited or abused.
If your needs are simple enough, you could probably get away with the basic approach, but if your use cases change down the road, you’ll just end up having to rewrite your code later. Because of this, it’s best to stick to accepted patterns and practices, even when your needs initially seem simple. If you think that your needs really aren’t going to evolve, I’d suggest looking into Azure Functions or other similar offerings instead.
Service Definition
First, we need to create an additional folder or so to keep our code organized. I’ve seen this commonly done in one of two approaches, depending on the level of granularity you may wish to achieve. The approach I used was to create a single folder called ‘Services’ (you could use ‘Repositories’ as well), within which I create all my API service definitions. The more detailed approach I’ve seen calls for an additional sub-folder for each service. Beyond this, there are a variety of organization schemes out there, so obviously there is a degree of personal preference here. That said, unless you have different classifications of service, or a lot of services, I recommend keeping it simple.
Once you have your folders, you’ll need to create two new class files via a right-click, then selecting Add -> New Item, and finally selecting ‘Class’ and specifying a file name. Since the last article used ‘coreObjectTypes’ as the controller, we’ll stick with that for the service, taking care to specify a capital ‘I’ as the first letter for one of the two files. In my case, I named my new class files ‘ICoreObjectTypesService’ and ‘CoreObjectTypesService’. Using the ‘I’ prefix makes it easier to identify interfaces down the road, should another developer need to look at your code. The interface file serves as our entry point to our service, while the second file provides a home for our business logic.
ICoreObjectTypesService
For the interface file, we only need a single ‘using’ statement at the top, which should contain a reference to our ‘Models’ namespace. In my case, this means that I am referencing the ‘EADDWeb.Models’ namespace, and nothing else. The newly created file will also already have a defined ‘namespace’ based on the folder structure that we created. As you can see in the below code snippet, I elected to keep to my simple structure, so my namespace is simply ‘EADDWeb.Services’.
using EADDWeb.Models;
namespace EADDWeb.Services {
public interface ICoreObjectTypesService
{
// ....
}
}
The next step is to add in the various actions that we want our API to be able to execute. These lines will typically represent the various Create/Read/Update/Delete (CRUD) operations, with each line representing a single action. Don’t worry if the line in the below snippet doesn’t immediately make sense, as I’ll break down each of the line elements afterwards. At a minimum, we need a line to represent each of the actions defined in our controller. We aren’t limited to only defining the calls for our API.
Task<coreObjectType> GetByIdAsync(byte id);
- Task
- Used to create an action to be executed
- Indicates what kind of object type should be returned when the task executes successfully
- GetByIdAsync
- Think of this like the name of a function that gets called by the controller or another class in your application
- (byte id)
- Indicates the type, and a name, for a parameter/argument that can be supplied as part of the function call
One thing of note that took me a bit to figure out is the set of key differences and similarities between PowerShell and C# functions. The biggest difference between PoSh and C# that I’ve discovered thus far is in the parameter options themselves. In PowerShell, parameters can be set as required, positional, optional, etc., and specifying an object type is technically also optional. That said, if you have a habit of writing PowerShell scripts or functions without strongly typed parameters, then I would strongly suggest changing that habit (yes, even for private functions). So far as I’ve been able to discover, in C# you have to write supplemental code to handle the scenarios you want to support. If you want a particular parameter to be mandatory, then you have to write the code to check for a null value and throw an error when it’s missing. At this point in my journey, I have yet to discover any evidence of keyword modifiers that would accomplish this automatically, though it wouldn’t surprise me if they do exist. The other main difference I’ve noticed is the specification of the output object. While you can indicate such things in PoSh advanced functions, it’s not a typical practice. In C# however, we have to specify the return object type, and then the code must actively return that specific type in order to avoid errors.
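To make that concrete, here’s a minimal sketch of enforcing a ‘mandatory’ parameter by hand in C#. The class, method, and parameter names here are purely my own inventions for illustration:

```csharp
using System;

public static class ParameterExample
{
    // Hypothetical method that treats 'refId' as mandatory. Unlike
    // PowerShell's [Parameter(Mandatory = $true)], C# leaves it to us
    // to validate the argument and throw when it's missing.
    public static string NormalizeRefId(string refId)
    {
        if (string.IsNullOrWhiteSpace(refId))
        {
            throw new ArgumentNullException(nameof(refId));
        }
        return refId.ToUpper();
    }
}
```

Calling the method with a null or empty value throws immediately, which is roughly what PowerShell gives you for free with a mandatory, strongly typed parameter.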
CoreObjectTypesService
As mentioned above, this file is the one that will contain our business logic for interacting with the database. This file is where we will define the ‘functions’ that are called through the interface. The first element we need is to add in our ‘using’ statements for our database context and our models. Since we are using Entity Framework to interact with our data, we will also need ‘using’ statements for ‘Microsoft.EntityFrameworkCore’ and ‘Microsoft.EntityFrameworkCore.ChangeTracking’. The next piece we need after that is to modify our class to implement our interface and initialize our database context, as shown in the snippet below.
using System;
using System.Collections.Generic;
using System.Collections.Concurrent;
using System.Linq; // This allows us to use Linq queries instead of SQL or more complex code to work with DB data
using System.Threading.Tasks; // This allows us to create background parallel threads, sort of like background jobs
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore; // This simplifies interaction with a DB, and reduces the amount of T-SQL knowledge required
using Microsoft.EntityFrameworkCore.ChangeTracking; // As implied by the last bit, this allows us to keep tabs on what gets changed
using EADDWeb.Data;
using EADDWeb.Models;
namespace EADDWeb.Services
{
public class CoreObjectTypesService : ICoreObjectTypesService
{
// Initiate DB context
private ApplicationDbContext db;
public CoreObjectTypesService(ApplicationDbContext db)
{
this.db = db;
}
// ....
}
}
As you can see, implementing our interface is a simple matter of adding a colon, followed by the name of the interface. After that, we declare our variable and associate it to our database context. The next step after this is to create our functions, which need to cover anything we defined in the interface file above. The below snippet shows an example of retrieving a coreObjectType by its database ID.
public Task<coreObjectType> GetByIdAsync(byte Id)
{
return Task.Run(async () =>
{
var c = await db.coreObjectTypes.FindAsync(Id);
return c;
});
}
By the way, in case it isn’t obvious, the above code snippet goes into our file where the four dots are shown in the prior snippet. Now we’ll break down the component elements of each of the lines shown above.
The first line should look pretty familiar, as it’s the exact same line as we saw in our interface file. The third line is the first where we actually perform any actions, so we’ll start with that one. If you’ve spent any real amount of time writing advanced PowerShell functions, or even if you’ve just spent a lot of time looking at other peoples’ functions, you’ve likely seen the ‘return’ keyword. In PowerShell, using the keyword isn’t necessary because PowerShell handles that on your behalf on the back end, though long time programmers still use the convention. The next part is the ‘Task.Run’ element, which we use to queue a desired set of actions to run on a background (thread pool) thread. The ‘async’ keyword tells the system to process this task asynchronously instead of performing each task in turn. While the latter approach reduces the potential for multi-access conflicts, it also slows down the application by making everything single-threaded. The remainder of the line, I believe, is shorthand for performing an action in the current context. The fifth line is the actual action we are performing, which is a query against the database, so we’ll spend some additional time breaking this one down.
- var c
- The ‘var’ keyword is how we declare a variable, and we are naming our variable simply ‘c’
- await
- This is the keyword that actually enables the asynchronous aspect by telling the system to allow the thread to run in the background while keeping track of it for a response to return
- db
- The ‘db’ is the variable that we used for our database context
- coreObjectTypes
- This element indicates the table we want to access, as defined in our models
- Note: This is why we needed a reference to ‘EADDWeb.Models’ in the using statements – without it, we wouldn’t know what tables or columns we have
- FindAsync(Id)
- This is the asynchronous version of the ‘Find’ method, along with the object we want to find – In this case, the ‘byte Id’ parameter value
We could have skipped creating a variable and just gone straight to returning everything to the right of the ‘=’. The challenge with this is in terms of long-term flexibility. By creating a variable now, if I end up with a new use case down the road that requires me to perform additional processing on the results before returning them, I don’t have to rework anything. For example, if I had a need to filter the results for a particular attribute (column value), I could simply add an additional parameter and then a LINQ expression on the returned object. One could, of course, argue that I should never need that for something as simple as the above example, since it should only ever return a single object with something as exact as the database ID column. What happens, however, if I have a use case where I’m using a boolean (true/false) column that allows me to enable or disable a particular object (which I do)? In this scenario, perhaps the requirement is that I can only return the object type if it is marked as ‘enabled’. That being the case, we would want to perform that check prior to returning a value to the calling scope. We could do that in our controller, but the key here is that we want the controller code to remain as simple as possible, so I really want to handle any processing within the service when I can. Putting myself into the habit of doing things this way can save me time later, and it represents an accepted pattern of practice.
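To illustrate, here’s a self-contained sketch of that hypothetical ‘enabled only’ variant. So that the snippet can stand on its own, I’m standing in for the EF Core context with a plain in-memory list, but the shape of the method mirrors the service code above; the method and class names are my own inventions:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Trimmed-down stand-in for the coreObjectType model from our Models namespace
public class coreObjectType
{
    public byte typeID { get; set; }
    public bool typeIsEnabled { get; set; }
}

public class CoreObjectTypesServiceSketch
{
    // Stand-in for db.coreObjectTypes; the real service queries EF Core
    private readonly List<coreObjectType> coreObjectTypes;

    public CoreObjectTypesServiceSketch(List<coreObjectType> seed)
    {
        coreObjectTypes = seed;
    }

    // Hypothetical variant of GetByIdAsync that performs extra processing
    // on the result before returning it: disabled types come back as null
    public Task<coreObjectType> GetEnabledByIdAsync(byte id)
    {
        return Task.Run(() =>
        {
            var c = coreObjectTypes.FirstOrDefault(t => t.typeID == id);
            if (c != null && !c.typeIsEnabled)
            {
                return null;
            }
            return c;
        });
    }
}
```

Because the check lives in the service, the controller never has to know the rule exists; it just gets back a null for anything it shouldn’t see.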
Putting It Together
Ok, so now we have defined our service interface, as well as our service. That means we’re done, right? Well…no, we aren’t quite done yet. To begin making use of our new service, we need to perform two additional tasks to get things going. One of these is modifying our application ‘Startup.cs’ to initialize our service and make it available for use. We can accomplish this by adding two lines to our file. The first is a ‘using’ statement near the top of our file to import our new ‘EADDWeb.Services’ namespace. Without this, we won’t be able to initialize the service, since the startup class won’t know it even exists. The second line needs to go into the ‘ConfigureServices’ function, and is what actually does the initializing of the service. As far as where this actually goes, I’ve only found limited information. Based on the various examples I’ve seen, we probably need to put our new line somewhere after initializing our controllers, as shown in the below snippet.
services.AddControllers();
services.AddScoped<ICoreObjectTypesService, CoreObjectTypesService>();
Our last step to start using our service is to modify our controller. As with the Startup class, the first step to accomplish this is to add a ‘using’ statement to pull in our ‘EADDWeb.Services’ namespace. Our next step is to take advantage of dependency injection to initialize an instance of our shiny new coreObjectTypes service, as shown below.
private readonly ICoreObjectTypesService _service;
public coreObjectTypesController(ICoreObjectTypesService service)
{
_service = service;
}
With the service injected, the simplified version of our ID query looks like the snippet below.
public async Task<ActionResult<coreObjectType>> GetTypeById(byte id)
{
coreObjectType c = await _service.GetByIdAsync(id);
if (c == null)
{
return NotFound(); // 404 Resource not found
}
return Ok(c); // 200 OK with object type in body
}
The first element is, of course, the creation of a variable for the service, followed by the initialization of the service. The only real difference is that the variable has to be read only. I assume this is due to the fact that we are working with the controller, but not something I know for sure. What I do know is that Visual Studio wouldn’t compile until I added the keyword. The second element is the simplified version of the ID query. One key difference on this one is the addition of the ‘async’ keyword again, as well as the type of task. Specifically, we want a task that returns an HTTP status type, along with the object that we requested. After that, we specify the name that will be called as part of the API call, and our lone parameter. Within the block of code, we’ve simplified things substantially. The first piece is the object type and a variable name. We then call our service method and pass in our parameter. The await tells the system to keep the thread open in the background until it completes. Finally, we process the variable to determine if we got back a result or not. I’ve seen variations out there for returning the correct results directly from the service, so that nothing would exist here but a return and the service call, but I’m still working to understand the options and how to use them.
Data Transfer Objects
So, we’re all done right? Well, not so much actually, as there’s another problem that comes up when we start thinking about updating objects in our database. For example, we don’t want someone attempting to change the ID of our records. In my case, I have a number of fields that should only be configurable when initially creating an object type, but should not be changed thereafter. Further, the properties for our objects, which are essentially the data columns, may not be the most user friendly. This is where Data Transfer Objects (DTOs) come into play. DTOs act as an overlay for our model definitions, working as a proxy for our base object type, and enable us to determine which properties are available for a particular action, and how they should be referenced. The DTO allows us to include or exclude specific properties, and even to change the names of those properties to something more easily digested. Of course, this also means that we are going to need some additional classes, which means we’ll need at least one additional folder for namespace management and organization. If you’re going to need to support a large number of use cases, meaning you will need multiple DTO classes for each base object, you may wish to organize these into sub-folders by object type as well. In my case, I have only a few use cases to support, and I want to keep my namespace relatively simple for now, so I’m only going to create a single ‘Dto’ folder in the root of my project. In addition to the DTO class definitions, we’re also going to have to make some code modifications, including mapping our new proxy classes back to our base objects.
DTO Classes
The first thing we need to determine is what use cases we want to use DTO proxy classes for. To do this, we should first take a look at our base class properties as defined in our Models folder. The below snippet is an example based on the coreObjectType we’ve been using throughout this walkthrough.
public partial class coreObjectType
{
// ....
[Key]
public byte typeID { get; set; }
[Required]
[StringLength(3)]
public string typeReferenceID { get; set; }
[Required]
[StringLength(40)]
public string typeDescriptor { get; set; }
public bool typeHasADACLs { get; set; }
public bool typeIsEnabled { get; set; }
[StringLength(30)]
public string typeTargetOU { get; set; }
// ....
}
There are more properties, but we’ve got enough to go with for the moment. Now that we have our object model to work off of, we need to consider our use cases, so we’ll go one by one. We won’t cover every use case of course, just enough to get the idea.
Add New
This use case is the easiest, which is why it’s up first. Even if we pre-seed our database, there will inevitably be a need to add new entries to the list. In most cases when working with databases, our primary key is going to be auto-incrementing and set by the database engine itself. This being the case, it’s not a value we want our users to try and supply when creating a new object type definition. Since it’s in the model however, as it must be, that means we likely want to use a DTO to adjust things. We need to create a new class definition in our Dto folder, and let’s call it ‘AddcoreObjectTypeDto.cs’. When setting the name, we want to ensure we clearly identify the action, the represented object, and the fact that it’s a Dto class. This way, when we adjust our code later to use it, it will be easier for anyone looking at the code down the road to figure out what’s going on. The snippet below shows what our new class might look like.
using EADDWeb.Models;
namespace EADDWeb.Dto
{
public class AddcoreObjectTypeDto
{
public string RefID { get; set; }
public string Description { get; set; }
public bool HasACLs { get; set; } = true;
public bool Enabled { get; set; } = true;
public string OU { get; set; }
}
}
As you can see above, we’ve added a reference to our ‘EADDWeb.Models’ namespace, along with our new proxy class definition. For simplicity, our class name should match our filename, and we then want to add all of the properties to be used. The first thing you’ll note is that we didn’t include the typeID property, so that users won’t have to supply this value when adding a new object type. Simple enough, but now you’re probably thinking to yourself ‘Hold on, those aren’t the same properties at all!’, and you’d be right. The other handy thing about using proxy classes like this is that it allows us to change how the properties appear to our users to make them more friendly and obscure our actual column names from prying eyes. A note of caution here however, as this does add a bit more work for us down the road. If we just use the same properties as our base definition, mapping our proxy to our model would be a simple one-liner. Since we’re completely changing the property names instead, we’ll have to map each property individually. In addition, this may also mean that we’ll have to support more use cases than we might otherwise, to keep our consumers from getting confused. How weird would it be if, when they retrieved the object type definition, the property was still ‘typeReferenceID’, but when they added a new one, it was ‘RefID’? Not to mention that you might also need to add more logic to do some translations on the client facing application. Moral of the story here boys and girls, is that you want to maintain a consistent experience across your API actions for property names, and each specialized use case increases the possibility of errors creeping in. If you find yourself needing to simplify just one property, maybe it’s no big deal, but if you find yourself overriding all of them as I did in this example, you may want to revisit your column names instead. In my actual API, I kept to the original column names, so this is just to illustrate my point.
As such, I won’t be covering the mapping of each property later on.
Update Existing
The next, and for this article the last, use case we’ll examine is updating. As mentioned above, I’m not changing up the column names in my actual class, so the example below will reflect this instead of the values shown above. So for this use case, we have an existing object type definition that we want to modify. There are some elements, however, that, at least for my purposes, should never be changed; for example, the typeID and the typeReferenceID fields. I don’t want people to repurpose a given type, particularly not my root types, as this might cause problems down the road if I distribute updates to my API for people to deploy. I’ve already accounted for someone not wanting to use a given type by providing a means to enable or disable it, but I don’t want them to be deleted by just anyone (we’ll cover role restrictions in another article). The problem here is that my base object definition requires these two fields. Our DTO will allow us to address this to an extent, as shown in the snippet below.
using EADDWeb.Models;
namespace EADDWeb.Dto
{
public class UpdateCoreObjectTypeDto
{
public byte typeID { get; set; }
public string typeDescriptor { get; set; }
public bool typeIsEnabled { get; set; } = true;
public string typeTargetOU { get; set; }
}
}
The first thing that you’ll probably notice here is that I still have the typeID listed. In my case, I can’t really get rid of the ID, as I need it to find the right object type to update, so I’ll need to handle that part in my service logic instead, when I update it to use my DTO class. The next thing you’ll notice is that I’ve only included a small subset of the properties defined in my full object, and it doesn’t include the required ‘typeReferenceID’ value. This won’t be an issue for us when performing our updates, as you’ll see a bit later, as the value will come from the existing object definition. The last deviation from our base model is with the ‘typeIsEnabled’ property, which we’re setting a default value for. This is because we’re making the assumption that, if they’re updating the object type, it’s because they want to use it. Now, when users go to update an object type definition, they can supply as few as two properties without any issues, and the API call won’t accept any values that I don’t want users to mess with. The reason why we are still including the ‘typeIsEnabled’ value is because I want to provide users the option to modify but still keep the definition disabled. If I didn’t care about that, I could exclude that value as well and just force enable it in my code logic. I actually do have shortcut methods to quickly enable or disable an object type, each with their own DTO class, but I’m trying to opt for better usability. Yes, I know that all these details aren’t strictly required to accomplish the goal of this article series, but it helps illustrate some of the differences between developing a PowerShell function and developing an API. We still need to plan for usability in the PowerShell world, particularly if we are creating a module that will be consumed by others, but now we’re doing something a lot harder by creating a middleware layer that our PowerShell module will have to interact with.
One of my design goals is to make my PowerShell code lighter, which means the flexibility has to come from the API side. Just some things to think about.
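For what it’s worth, one of those shortcut DTO classes can be tiny. The sketch below is purely illustrative (the class name is my own invention, not what I actually shipped); the caller supplies nothing but the ID of the type to flip:

```csharp
namespace EADDWeb.Dto
{
    // Hypothetical DTO for a quick enable/disable shortcut action;
    // all other properties come from the existing database record
    public class ToggleCoreObjectTypeDto
    {
        public byte typeID { get; set; }
    }
}
```

Keeping shortcut actions on a dedicated DTO like this means the full update DTO never has to bend to accommodate them.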
AutoMapper
From what I’ve discovered while researching the topic of mapping DTOs to base objects, this was at one time a very painful process requiring lots of lines of code to accomplish. Fortunately, an enterprising individual was nice enough to create an open source library to make this a lot easier, unless you have to do a lot of name re-mapping.
Setting Up
The first step we need to take is to use the NuGet Package Manager to add both the AutoMapper and the AutoMapper.Extensions.Microsoft.DependencyInjection libraries to our project. Next we’ll go back into the Startup.cs class and add a ‘using’ reference to the top of the file for the AutoMapper class only. After that, we’ll make one more modification to actually initialize the library within our application by adding a new entry to our ConfigureServices section as shown in the snippet below.
public void ConfigureServices(IServiceCollection services)
{
// ....
services.AddAutoMapper(typeof(Startup));
// ....
}
I’ll be honest here and tell you that I didn’t fully understand the part of the line within the parentheses at first. From what I’ve since read, passing ‘typeof(Startup)’ appears to tell AutoMapper which assembly to scan for classes that inherit from ‘Profile’, so it mostly just needs to point at a type that lives in the right project.
The next thing we need is a way to tell the application that our Dto objects are related to our ‘coreObjectType’ from our model. For this, we create a new class file in the root of our project called ‘AutoMapperProfile.cs’. Next we need to add the typical using statements for both our Models and Dtos namespaces, as well as the AutoMapper namespace. Finally, we want to create our AutoMapperProfile class definition, which should inherit from the ‘Profile’ class. Within this, we will identify which Dto objects should be associated with each model class in our project. The snippet below shows an example that maps three Dto classes back to our ‘coreObjectType’ class.
public class AutoMapperProfile : Profile
{
public AutoMapperProfile()
{
CreateMap<coreObjectType, GetObjectTypesDto>();
CreateMap<AddObjectTypeDto, coreObjectType>(); // note the direction: incoming Dto maps to the model
CreateMap<UpdateObjectTypeDto, coreObjectType>();
}
}
Now that we’ve set up the basic bits of our automapper, we need to actually update our code to use the new Dto classes in the appropriate places.
Code Updates – Interface
First let’s modify our ICoreObjectTypesService class. In this file, our changes are simple, as all we need to do is replace calls that list ‘coreObjectType’ with the appropriate Dto for what we are doing. Based on the example above, we need to update our Add and Update actions, as shown in the snippet below.
public interface ICoreObjectTypesService {
Task<coreObjectType> CreateAsync(AddObjectTypeDto c);
Task<coreObjectType> UpdateAsync(UpdateObjectTypeDto c);
Task<IEnumerable<coreObjectType>> GetAllAsync();
Task<coreObjectType> GetByIdAsync(byte id);
Task<coreObjectType> GetByRefIdAsync(string refid);
Task<bool> DisableByIdAsync(byte TypeId);
Task<bool> EnableByIdAsync(byte TypeId);
Task<bool> DeleteById(byte TypeId);
Task<bool> ObjectExists(byte TypeId);
}
We want to make the same changes to our controller API definitions as those shown above. We need to ensure that those entries stay in sync with what we have defined in our interface.
Notice that for our CreateAsync action we have changed the value within the parentheses to show our new ‘AddObjectTypeDto’ class, instead of the ‘coreObjectType’ class. You will also notice that I did not change the return type at the beginning, within the ‘Task<>’ definition. In this particular instance, I want the requestor to supply only the values I want to allow to add a new Object Type definition, which you may recall should not include the typeID. Even though we are limiting what we want to be provided, we are still returning a full ‘coreObjectType’ as part of the result once it’s been created. The same goes for our UpdateAsync action. We expect the requestor to submit the ‘UpdateObjectTypeDto’ class, but we are then returning the full Object Type with the updated values. Finally, you will note that none of the other actions are being changed. Since we are only taking in a single property for these, we don’t need to specify a Dto. Later, when we add in our web application for interfacing graphically, we may create additional Dto definitions for specific use cases, but for now this should be sufficient.
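As a quick illustration of the controller side of those changes, a create action built around the new Dto might look something like the sketch below. The action name and the attribute are assumptions on my part (your routes may differ), and it relies on the same ‘_service’ field shown earlier:

```csharp
[HttpPost]
public async Task<ActionResult<coreObjectType>> AddType(AddObjectTypeDto c)
{
    // The requestor supplies only the Dto properties; the full
    // coreObjectType comes back once the record has been created
    coreObjectType added = await _service.CreateAsync(c);
    if (added == null)
    {
        return BadRequest(); // 400 Bad Request, the record was not created
    }
    return Ok(added); // 200 OK with the full object type in the body
}
```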
Code Updates – Service
Just as before, any time we make an update to our interface, we need to make matching updates to our service definition as well. The updates here will be a bit more involved, but it won’t be too bad. We’ll start by adding a using reference for ‘AutoMapper’ at the top of the file. Once that’s done, we will use dependency injection to make the profile mapper accessible, in a similar manner to the DB context. An example is shown in the snippet below.
public class CoreObjectTypesService : ICoreObjectTypesService
{
// Use instance data context field to avoid internal caching
private ApplicationDbContext db;
// Inject AutoMapper profile
private readonly IMapper mapper;
public CoreObjectTypesService(ApplicationDbContext db, IMapper mapper)
{
this.db = db;
this.mapper = mapper;
}
// ....
}
As you can see, we are creating a new variable declaration, just as with the DB context. You’ll note that we have made the new variable read only, since we don’t want users to be able to modify our mappings, just consume them. Next we modify the service injection to include our AutoMapper service in addition to the database context.
As a note here, I have noticed that a lot of examples on the web seem to incorporate an underscore ( _ ) as a prefix when adding services. At the same time, most of the recent book excerpts seem to skip this convention. I think as long as you are very clear in your variable names, there shouldn’t be a real need to leverage this particular convention if you don’t want to. Even though you may have seen this in some of my previous examples, I’ve actually been going back and switching that out myself, as I prefer not to use the convention. If you know why this convention was started, or the reasons behind it, I’d love to have feedback via the comments, but for now I like things the other way.
The last thing we need to do is to update our actual function definitions to begin using the Dto classes. As with the controller, we first update the type of object we expect to be passed in as an argument. After that, we can do any pre-processing we might wish, just as before. Before we can send the information to the DB however, we have to map the value back to the original class, as shown in the snippet below.
public async Task<coreObjectType> CreateAsync(AddObjectTypeDto c)
{
// add to database with EF Core
c.typeReferenceID = c.typeReferenceID.ToUpper();
coreObjectType objectType = mapper.Map<coreObjectType>(c);
EntityEntry<coreObjectType> added = await db.coreObjectTypes.AddAsync(objectType);
int affected = await db.SaveChangesAsync();
if (affected == 1)
{
return objectType;
}
else
{
return null;
}
}
As you can see above, we really only have one line that is new – ‘coreObjectType objectType = mapper.Map<coreObjectType>(c)’. This line tells our application to use the mapper service to automatically associate the argument value, which is using the ‘AddObjectTypeDto’ class, back to the ‘coreObjectType’ class required to interact with our database.
While this may seem like a lot of effort to go to for simple changes to the values being provided, it does still save us some time. In the event that we later want to rename properties for the user to consume, either via the API, or via our web interface, it will really save us time. Remember that AutoMapperProfile class? That is where we would define the property name mapping details. Without that, we would have to define this mapping, along with all of the various properties, each time we wanted to use the Dtos. By having the framework already in place, we can define it once and be done.
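And for completeness, had I kept the renamed properties from the earlier ‘AddcoreObjectTypeDto’ example, my understanding is that the profile entry would need explicit per-property mappings, roughly like the fragment below. This is untested on my part, since I kept the original column names in my actual API:

```csharp
// Inside the AutoMapperProfile constructor: each renamed Dto property
// has to be pointed at its matching model property by hand
CreateMap<AddcoreObjectTypeDto, coreObjectType>()
    .ForMember(dest => dest.typeReferenceID, opt => opt.MapFrom(src => src.RefID))
    .ForMember(dest => dest.typeDescriptor, opt => opt.MapFrom(src => src.Description))
    .ForMember(dest => dest.typeHasADACLs, opt => opt.MapFrom(src => src.HasACLs))
    .ForMember(dest => dest.typeIsEnabled, opt => opt.MapFrom(src => src.Enabled))
    .ForMember(dest => dest.typeTargetOU, opt => opt.MapFrom(src => src.OU));
```

You can see why keeping the names aligned in the first place is the easier road; the moment they diverge, every property becomes a line of mapping configuration you have to maintain.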
Wrapping Up
So that’s about it for this article. We’ve now updated our application to allow us to modify property values being submitted or returned to our users. We’ve also split our application up to allow us to define services independently of our API. This will really come in handy later, as it’s likely we might need to interact with DB objects from other classes. Remember that anything exposed in our API is visible to everyone able to access it. Sure, we can try to lock it down, but any misses could have unpleasant repercussions. Far better to simply keep private calls private, and limit what goes into the API just to the things that need to be accessed externally.
Until next time…