
How to scale out Dynamics 365 for Finance and Operations on-premises


In this post I’m going to explain how to scale out Dynamics 365 for Finance and Operations on-premises by adding new VMs to your instance.

 

Overview

The process is quite straightforward, and Service Fabric takes care of the rest once a new node is added to the Service Fabric cluster. In this post, I'm going to showcase it by adding a new AOS node to an existing Dynamics 365 for Finance and Operations 7.3 with Platform Update 12 on-premises instance. The procedure is as follows.

  1. Update the Dynamics 365 for Finance and Operations on-premises configuration for the new AOS node
  2. Set up the new AOS machine for Dynamics 365 for Finance and Operations on-premises
  3. Add the new AOS machine as an AOS node in the Service Fabric cluster
  4. Verify the new AOS node is functional

Prerequisites

  1. The new AOS machine must fulfill the system requirements documented here
  2. Basic configuration of the new AOS machine, such as joining the domain, assigning an IP address, and enabling file and printer sharing, is complete

Procedures

Update the Dynamics 365 for Finance and Operations on-premises configuration for the new AOS node

  1. Update the ConfigTemplate file to include the new AOS node. For detailed instructions, please refer to the documentation here.
    a. Identify which fault domain and upgrade domain the new AOS node will belong to
    b. Update the AOSNodeType section to include the new AOS machine
  2. Add an A record for the new AOS node in the DNS zone for Dynamics 365 for Finance and Operations on-premises. For detailed instructions, please refer to the documentation here.
  3. Run the Update-D365FOGMSAAccounts cmdlet to update the group managed service accounts. For detailed instructions, please refer to the documentation here.
  4. Grant Modify permission on the aos-storage file share to the new AOS machine. For detailed instructions, please refer to the documentation here. (A combined sketch for steps 2 and 4 follows this list.)
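Steps 2 and 4 can be scripted. Below is a minimal sketch with hypothetical values for the zone name, node name, IP address, share path, and domain; adjust them to match your ConfigTemplate and environment, and run the DNS cmdlet on (or against) your DNS server.

# Step 2: add an A record for the new AOS node in the on-premises DNS zone
Add-DnsServerResourceRecordA -ZoneName "d365ffo.onprem.contoso.com" -Name "SQLAOSF1AOS4" -IPv4Address "10.0.0.24"

# Step 4: grant Modify on the aos-storage folder to the new AOS machine account (note the trailing $ on the computer account)
icacls "\\fileserver\aos-storage" /grant 'CONTOSO\SQLAOSF1AOS4$:(OI)(CI)M'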

Set up the new AOS machine for Dynamics 365 for Finance and Operations on-premises

  1. Install prerequisites. For detailed instructions, please refer to the documentation here.
    a. Integration Services
    b. SQL Client Connectivity SDK
  2. Add the gMSA svc-AXSF$ and the domain user AxServiceUser to the local administrators group.
  3. Set up the VM. For detailed instructions, please refer to the documentation here.

    a. Copy the D365FFO-LBD folder from an existing AOS machine, then run the steps below in PowerShell as an administrator from the D365FFO-LBD folder.

NOTE: The D365FFO-LBD folder is generated by the Export-Scripts.ps1 script when Dynamics 365 for Finance and Operations on-premises is deployed, per the documentation here.

    b. Run Configure-PreReqs.ps1 to install the prerequisite software on the new AOS machine.
    c. Run the cmdlets below to complete the prerequisites on the new AOS machine:
.\Add-GMSAOnVM.ps1
.\Import-PfxFiles.ps1
.\Set-CertificateAcls.ps1

  4. Run Test-D365FOConfiguration.ps1 to verify all setup has been done correctly on the new AOS machine.
  5. Install the ADFS certificate and the SQL Server certificate.
    a. Install the ADFS SSL certificate into the Trusted Root Certification Authorities store of the Local Machine.
    b. Install the SQL Server certificate (the .cer file) into the Trusted Root Certification Authorities store of the Local Machine.

Add new AOS machine as an AOS node in Service Fabric Cluster

  1. Full instructions on how to add or remove a node in an existing Service Fabric cluster can be found here. The steps below are performed on the new AOS machine.
  2. Download, unblock, and unzip the same version of the standalone package for Service Fabric for Windows Server as the existing Service Fabric cluster.
  3. Run PowerShell with elevated privileges, and navigate to the location of the package unzipped in the previous step.
  4. Run the cmdlet below to add the new AOS machine as an AOS node in the Service Fabric cluster:


.\AddNode.ps1 -NodeName <AOSNodeName> -NodeType AOSNodeType -NodeIPAddressorFQDN <NewNodeFQDNorIP> -ExistingClientConnectionEndpoint <ExistingNodeFQDNorIP>:19000 -UpgradeDomain <UpgradeDomain> -FaultDomain <FaultDomain> -AcceptEULA -X509Credential -ServerCertThumbprint <ServiceFabricServerSSLThumbprint> -StoreLocation LocalMachine -StoreName My -FindValueThumbprint <ServiceFabricClientThumbprint>

Note the following elements in the above cmdlet:

AOSNodeName – node name in the Service Fabric cluster. Refer to the configuration file or Service Fabric Explorer to see how the existing AOS nodes are named
AOSNodeType – the node type that the new node belongs to
NewNodeFQDNorIP – FQDN or IP of the new node
ExistingNodeFQDNorIP – FQDN or IP of an existing node
UpgradeDomain – upgrade domain specified in ConfigTemplate for the new node
FaultDomain – fault domain specified in ConfigTemplate for the new node
ServiceFabricServerSSLThumbprint – thumbprint of the Service Fabric server certificate, star.d365ffo.onprem.contoso.com
ServiceFabricClientThumbprint – thumbprint of the Service Fabric client certificate, client.d365ffo.onprem.contoso.com
LocalMachine, My – the store location and store name where the certificates are installed

NOTE: Internet access is required, as the AddNode.ps1 script downloads the Service Fabric runtime package.
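As a usage example, a filled-in invocation might look like the following. The node name, FQDNs, domains, and thumbprints are hypothetical placeholders, and the fault domain format (fd:/...) is based on a typical standalone Service Fabric configuration, so take the actual values from your own ConfigTemplate.

.\AddNode.ps1 -NodeName AOS4 -NodeType AOSNodeType -NodeIPAddressorFQDN SQLAOSF1AOS4.d365ffo.onprem.contoso.com -ExistingClientConnectionEndpoint SQLAOSF1AOS1.d365ffo.onprem.contoso.com:19000 -UpgradeDomain UD4 -FaultDomain fd:/fd4 -AcceptEULA -X509Credential -ServerCertThumbprint <ServiceFabricServerSSLThumbprint> -StoreLocation LocalMachine -StoreName My -FindValueThumbprint <ServiceFabricClientThumbprint>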

  5. Once the new node is added, set antivirus exclusions to exclude the Service Fabric directories and processes.
  6. Get and edit the existing Service Fabric configuration once the new node has synced.

    a. Run the cmdlet below to connect to the Service Fabric cluster:

$ClusterName = "<ExistingNodeFQDNorIP>:19000"
$certCN = "<ServiceFabricServerCertificateCommonName>"
Connect-ServiceFabricCluster -ConnectionEndpoint $ClusterName -KeepAliveIntervalInSec 10 -X509Credential -ServerCommonName $certCN -FindType FindBySubjectName -FindValue $certCN -StoreLocation LocalMachine -StoreName My

Note the following elements in the above cmdlet:

ExistingNodeFQDNorIP – FQDN or IP of an existing node
ServiceFabricServerCertificateCommonName – common name of the Service Fabric server certificate, *.d365ffo.onprem.contoso.com
LocalMachine, My – the store location and store name where the certificate is installed

    b. Run the Get-ServiceFabricClusterConfiguration cmdlet and save the output as a JSON file.
    c. Update ClusterConfigurationVersion with a new version number in the JSON file.
    d. Remove the WindowsIdentities section from the JSON file.
    e. Remove EnableTelemetry.
    f. Remove FabricClusterAutoupgradeEnabled. (A combined sketch of steps b through f follows.)
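Here is a minimal sketch of steps b through f, assuming the cluster connection from step a is still open. The property paths used for the removals are assumptions based on a typical standalone Service Fabric configuration file, so verify them against your exported JSON.

$configPath = "C:\Temp\ClusterConfig.json"
# Step b: export the current cluster configuration to a JSON file
Get-ServiceFabricClusterConfiguration | Out-File $configPath
$config = Get-Content $configPath -Raw | ConvertFrom-Json
# Step c: bump the configuration version
$config.ClusterConfigurationVersion = "2.0.0"
# Steps d through f: remove the sections (paths assumed; adjust to match your file)
$config.Properties.Security.PSObject.Properties.Remove("WindowsIdentities")
$config.Properties.PSObject.Properties.Remove("EnableTelemetry")
$config.Properties.PSObject.Properties.Remove("FabricClusterAutoupgradeEnabled")
$config | ConvertTo-Json -Depth 100 | Out-File $configPath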

  7. Start the Service Fabric configuration upgrade.

    a. Run the cmdlet below to start the Service Fabric configuration upgrade:

Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath <Path to Configuration File>;

    b. Run the cmdlet below to monitor the upgrade progress:

Get-ServiceFabricClusterUpgrade
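Optionally, you can poll until the upgrade finishes with a simple loop like the one below. This is my own convenience loop rather than part of the documented steps, and the UpgradeState values matched against are assumptions, so adjust them as needed.

do {
    $upgrade = Get-ServiceFabricClusterUpgrade
    Write-Host "$(Get-Date -Format T) - $($upgrade.UpgradeState)"
    Start-Sleep -Seconds 30
} while ($upgrade.UpgradeState -notmatch "Completed|Failed")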

Verify new AOS is functional

  1. Confirm the new AOS machine has been added as an AOS node successfully (compare the node list before and after).


  2. Validate the new AOS node is functional as expected.
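A quick way to check from PowerShell (assuming the Connect-ServiceFabricCluster session shown earlier) is to list the cluster nodes and confirm the new AOS node reports NodeStatus Up and HealthState Ok:

Get-ServiceFabricNode | Select-Object NodeName, NodeType, NodeStatus, HealthState | Format-Table -AutoSize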


Testing #MSDyn365FO OData with Postman


Last year I posted on using Postman. Things have changed since then, so I need to post an update.

http://dynamicsnavax.blogspot.com/2017/05/dynamics-365-for-operation-web-service.html

There is a good article that Microsoft has written which I followed without any issues.

https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/data-entities/third-party-service-test#prerequisites

Below are some screenshots in case you are a visual person like me.

In the environment setup, it should look something like this.


When you run it, you should get a response.


Once you've got the token, you are good to go with your messages.

Below is the same example from the blog post.



If you run into any problems, click on the console icon at the bottom. It should give you a bit more information.


If you get a 401 error, it is usually a typo. Make sure you got the spaces and backslashes correct. One simple character can drive you crazy.
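If you'd rather script the same flow instead of using Postman, here is a minimal PowerShell sketch of the token request and an OData call. The tenant, client ID, client secret, environment URL, and entity name are placeholders for your own values.

$tenant       = "yourtenant.onmicrosoft.com"
$clientId     = "<application (client) id>"
$clientSecret = "<client secret>"
$resource     = "https://yourenvironment.cloudax.dynamics.com"

# Request a token from Azure AD using the client credentials grant
$token = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenant/oauth2/token" -Body @{
    grant_type    = "client_credentials"
    client_id     = $clientId
    client_secret = $clientSecret
    resource      = $resource
}

# Call an OData entity with the bearer token (the entity name is illustrative)
$headers = @{ Authorization = "Bearer $($token.access_token)" }
Invoke-RestMethod -Uri "$resource/data/Customers?`$top=1" -Headers $headers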

Announcing Microsoft Dynamics 365 availability in Compliance Manager


More and more organizations continue to increase their adoption and use of Microsoft Dynamics 365 and choose Microsoft as their partner to help them digitally transform with modern, unified, intelligent, and adaptable business applications. To help our customers meet their security, privacy, and compliance needs when using a Microsoft cloud service such as Dynamics 365, Azure, or Office 365, we are happy to announce that Compliance Manager is now available for Dynamics 365, at no additional charge to licensed Dynamics 365 users.


Microsoft is committed to helping our customers with their compliance journey, and with the May 25, 2018 effective date for the General Data Protection Regulation, or GDPR, Dynamics 365 customers are now empowered to manage their compliance activities from one place with three key capabilities: helping you perform on-going risk assessments, providing you actionable insights from a certification/regulation view, and simplifying your journey to manage compliance activities with the capability to create multiple assessments not only for GDPR, but for ISO 27001, ISO 27018, NIST 800-53, and HIPAA.


Compliance Manager enables your organization to perform ongoing risk assessments of what are identified as Microsoft's responsibilities by evaluating detailed implementation and test information for our internal controls. We are committed to being transparent about how we process and protect your data so that you can trust Microsoft and leverage the technology we provide.


We also provide the information and tools to conduct a self-assessment of your own responsibilities for meeting regulatory requirements, so that you can review, from a customer point of view, which controls apply to you and understand what your responsibilities are for meeting your GDPR requirements.
With on-premises deployments, customers are 100 percent responsible for their data protection and compliance needs. As customers move their data to a Microsoft cloud service, such as Dynamics 365, Azure, or Office 365, Microsoft partners with you to help you achieve and maintain compliance with the industry regulations your business requires under the shared responsibility model. You can read more about the model in Shared Responsibilities for Cloud Computing.


We are excited to announce the availability of 63 customer managed controls and 48 Microsoft managed controls for Dynamics 365, with more planned to be added in the coming months.


The Customer Managed Controls section provides you with recommended actions that your organization can take, along with tools to facilitate data protection and compliance management. Each family of controls includes control IDs, titles, and descriptions, along with a list of related controls from other standards and regulations, and the Compliance Score for the control. Each control also includes workflow, tracking, and evidence-gathering features that enable you to:

  • Assign implementation or verification tasks to individuals within your organization;
  • Enter implementation details, test plan information, test details, implementation and test dates, and test results;
  • Upload evidence to verify compliance activities and control implementations.

You can get started today by going to https://aka.ms/compliancemanager and clicking "Launch Compliance Manager." Next, select how you want to log in.

Once you have logged in, take the tour to learn about the different features of Compliance Manager:

Compliance Manager Tour

Then to start your Dynamics 365 GDPR compliance journey, click “Add Assessment”:

Then click "Create a new Group", provide a name, select Dynamics as the product, and select GDPR as the Assessment; you can then begin to review the customer managed controls and the steps you can take to meet your GDPR obligations.

Click on the name you provided to see the details for this assessment and get access to the different controls for GDPR:

By selecting one of the Customer Managed Controls, you will see the Control ID and Description, Related Controls / Articles across other industry regulations to help you streamline your processes, Implementation Details including who has been assigned to the control, Status, dates, Test date and Test result, Customer Actions, as well as Test Plan & Management Response details:

And remember to check back often for the addition of more Microsoft and customer managed controls for Dynamics 365, as well as both Microsoft and customer managed controls for ISO 27001, ISO 27018, and other compliance offerings, as we continue to provide the information and details to help you with your GDPR and broader compliance journey.

Click here to go to Compliance Manager, take the tour, create a Dynamics 365 Assessment for GDPR, and explore the Microsoft and customer managed controls to help you manage your compliance journey.

DYN365FO Form Drill Through Opening Wrong Form

Hello AX World, Recently I have run into an issue. I am in the list page and trying to click through some ID to get to the form I expect and I get a completely different form opened. Strange… Hey, if you...(read more)

Change maintenance mode for license configuration in D365

In this blog we will discuss, how to change maintenance mode for license configuration in Dynamics365. When we need to enable / disable configuration key we can do it by going at navigation System administration...(read more)

How to use security diagnostics for task recording in Dynamics 365?

In AX 2012 we have Security Development Tool Link Unfortunately, it’s not available in the current version and replaced by another useful tool called Security diagnostics tool. In the previous Post , we...(read more)

Relationship data analysis with XrmToolBox

Background Relationships in Microsoft Dynamics 365 Customer Engagement (CRM) and PowerApps Model Driven Apps can be decorated with a number of characteristics that define their behavior. These characteristics...(read more)

Product lifecycle in Dynamics 365 for Finance and Operations

If it has escaped your attention, Microsoft has now added a new field called Product lifecycle state to products in Dynamics 365 for Finance and Operations (D365FO). With this field, we are now able...(read more)

Complex SLA 7 - Testing Complex Custom SLA Instances in Dynamics 365


↩ Part 6 - Configure SLA Instance Business Logic in Dynamics 365

In this article, the final one in this series, I will describe how I tested the SLA and demonstrate the results of a basic test.

The results show that each of the SLA KPIs is started and stopped when required, and that the Applicable From and Failure Time dates and times for each of the SLA KPIs are all accurate because they are based on separate SLA Instances with their own Applicable From date and time values.

Test the SLA

To execute a simple and repeatable basic test of the SLA described in this series of articles, I created a Workflow Process to perform each of the required steps, such as populating the date and time fields used by the SLA, creating the SLA KPI-specific SLA Instances, and waiting for a pre-set period of time between steps.


The following image shows the First Response By SLA Instance displayed in the SLA Instances subgrid on the Opportunity a short time after the Opportunity has been created and before the Quote Sent date and time has been populated to indicate the Quote has been sent to the Customer. The second image shows the form for the SLA Instance.

At this time this is the only applicable SLA KPI for the Opportunity.


The following image shows the Success of the First Response By SLA KPI as a result of the Quote being sent before the Failure Time for the First Response By SLA KPI has been reached. The second image shows the form for the SLA Instance.

At this time there are no applicable SLA KPIs for the Opportunity.


The following image shows the First Response After Quote Accepted By SLA Instance and the Order Sent By SLA Instance in the SLA Instances subgrid on the Opportunity a short time after the Quote Accepted date and time has been populated to indicate that the Quote has been Accepted by the Customer. The second and third images show the forms for the SLA Instances.

Here we see that the Failure Times for the First Response After Accepted By SLA KPI and the Order Sent By SLA KPI are correctly 1 hour and 2 hours, respectively, after the Quote Accepted date and time.


The following image shows the Success of the First Response After Quote Accepted By SLA KPI as a result of the Customer being contacted after the Quote was Accepted and before the Failure Time. The second and third images show the forms for the SLA Instances.

This also shows the Send Order By SLA KPI is still active.


The following image shows the Success of the Order Sent By KPI as a result of the Order being Sent before the Failure Time (albeit an incorrect Failure Time) for the Order Sent By SLA KPI was reached.


The following image shows the related SLA Instances for each of these SLA KPIs.


The approach demonstrated in this series shows how multiple SLA KPIs for the same SLA that need to be Applicable From different dates and times can be implemented.

Finally, there are two limitations of this approach. Firstly, the SLA KPI Timers were not directly displayed on the Opportunity Main Form, and secondly, the SLA KPI Timers cannot be paused based on the Status/Status Reason of the Opportunity. The second issue can be solved by ensuring there are Status/Status Reasons on the SLA Instance that match those on the Opportunity, and that there is a process on the Opportunity to ensure that the Status/Status Reason values of the related SLA Instances always match those of the Opportunity.

D365Tour Press Review – June 2018

D365Tour Press review - June 2018 Microsoft en France / Articles en Français Cloud PME : pourquoi passer...(read more)

Sales Order Picking in Dynamics 365 Finance & Operations

Sales Order Picking in Dynamics 365 Finance & Operations The Warehousing App in Dynamics 365 Finance & Operations The sales order picking is a generic business process that represents picking inventory to fulfill a sales order, which is a ...read more

How IT can Become a Revenue Driver

The shift from IT being a cost center to becoming an increasingly innovative and value creating role is happening across industries today. Leaders are now consistently turning to the IT team for ideas...(read more)

Dynamics 365 Business Central for Field Service


How the all-in-one business management solution is being used for improving field service operations.

Field service management (FSM) refers to the management of a company’s resources employed at or en route to the property of clients, rather than on company property. FSM most commonly relates to companies that need to manage installation, service or repairs of systems or equipment. If you are working in the field service realm, you understand the challenges that come along with the job – but imagine being able to use work orders to capture and describe the needed work while also capturing the required resources and skills to complete that work…would you jump on the opportunity to use a product that promises such benefits?

Enter Microsoft Dynamics 365 Business Central, a cloud-based all-in-one business management solution that is changing the face of field service work. If you have spent any time around the DMS websites and fun-filled blogs, you’ve undoubtedly seen the repeated references to Dynamics 365 Business Central and how the solution works in a myriad of applications; now, we’re going to shine a white-hot spotlight on how it is up to all sorts of good for field service workers.

Painless Resource Scheduling, Optimizing Work Order Requests and More

In the field service arena, it's all about work orders. The process of resource scheduling with regard to field service enables the configuration of constraints and parameters to meet business needs; put simply, it optimizes work order requests by resource availability, required skills, work order duration, priority, promised time window and so much more. With Microsoft Dynamics 365, resource scheduling – and a lot more – is made absolutely painless in a myriad of ways.

Say Goodbye to Work Order Confusion

Through Business Central, work orders are used to track equipment, repairs, inspections and preventative maintenance, with a schedule board that displays the resources and the associated requirements. Included in this are requirements from work orders and opportunities, given the fact that the same pool of resources may be performing the repair work and that there may be opportunities for new service.

As resources are scheduled, booking is captured for a specific time and place, which can be performed manually or by using the “smart filtering” feature via the Schedule Assistant. According to Microsoft reps, customers can best schedule their resources using Resource Scheduling Optimization as their challenges grow more complex.

Let’s take a quick look at that primary feature now…

Say Hello to Resource Scheduling Optimization

As one of Business Central’s more premium features for field service workers, Resource Scheduling Optimization enables the configuration of constraints and parameters to meet specific business needs. It works by optimizing work order (there’s that phrase again) requests by:

  • Resource Availability
  • Required Skills
  • Work Order Duration
  • Priority
  • Promised Time Windows
  • More

Here’s a good example of how Resource Scheduling Optimization makes a difference in the field service sector: The application can schedule events based on matching skills and staying within scheduled working hours; in this instance, with the Resource Scheduling Optimization applied, the schedule has been optimized with the defined constraints and objectives, while work orders and Opportunities have been scheduled across the available resources based on skills and within working hours.

The bottom line is that with Resource Scheduling Optimization and Business Central working in tandem in field service, the ability to intelligently schedule and reschedule in minutes enables resources to better meet customer commitments and organizations to make the best use of available capacity.

IAMCP Announces Finalists in the 2018 Global Partner-to-Partner Awards Program

International Association of Microsoft Channel Partners Congratulates All Finalists in the 2018 IAMCP Member Awards Program. Winners and Runners-Up to be announced at Microsoft Inspire in Las Vegas Tuesday...(read more)


Workflow Approval Request Error After Upgrade

D365 Application Insights – JavaScript

If you've implemented the sample integration for Application Insights, you know that a small JavaScript file was provided that was to be included on any forms where you wanted logging. This little script downloads a big script, and then, by having this, all sorts of useful data is supposed to be magically logged. The previously linked page shows the user name, a page/entity name of some sort, and a page view duration measured in seconds. I'm not quite sure where that data is coming from, but I ended up with data that looked more like this.

No way to identify which form the data came from, no way to identify which user (user_id doesn’t tie back to anything), and the page view durations are all in the milliseconds.

The browser timing (page load times) metrics weren’t any better.


495 millisecond page loads? That never happened… not ever.

It's not all bad. It does capture exceptions on the form (not that you can trace them back to where they occurred), and it provides some interesting data on browser use, client location, etc.

It also doesn't tell you that there is an existing SDK that you can use to log your own telemetry. This is where what I did comes in. I've packaged everything together, but ultimately you still end up using the same small initialization script (which defaults some options to off) that pulls down the larger script; I've then added a layer on top that adds a few features and fixes some of the previously mentioned problems.

I’ve built onto these telemetry types:
  • Trace
  • Event
  • Metric
    • Page load duration (custom metric implementation)
    • Method execution duration (custom metric implementation)
  • Dependency
  • Exception
  • PageView
For examples of setting up and using, see the GitHub site’s wiki page.
By default with each request it’s going to log the following D365/CRM specific data as custom dimensions:
  • User Id
  • Entity Name
  • Entity Id
  • Form type
  • Form name
  • Organization name
  • Organization version
  • Source (JavaScript)
  • + any other custom dimensions of your choosing
This is in addition to the normal items that are logged like the date/time of the event, browser info, etc. It’s using the ids rather than friendly names to reduce overhead a bit. This could easily be updated to pull the friendly name or else you can cross reference it back to D365/CRM later. Either way, you’ll still have more contextual information about the data you’ve logged.

When setting up on a form, you'll pass a JSON configuration string as a parameter to the setup function. It controls which types of items will be logged (for example, turning off tracing in production), the percentage of each event type to log (so maybe you only log 25% of total page view times), and debug mode. Exceptions at the form level should still be logged (regardless of whether you catch them or not), and page view times and page load times will be logged without any interaction so long as they aren't disabled. All the other items rely on the developer to call the specific logging function when needed.

My implementation of a page view should be a bit more accurate, as it logs the total time spent on the page just as the user is closing the form. And how does this happen? As I learned, all modern browsers have the Beacon API, which includes the sendBeacon method, whose exact purpose is to do things like send telemetry just as the user is leaving the page. Of course, Internet Explorer 11 doesn't have this and it's still a supported browser, in which case I had to resort to a bit of a hack by including a very short synchronous request in the page's beforeunload event. Not really perceptible, but still ugly.

Now my page view time is showing around 41 seconds, which sounds a little more reasonable. Also shown here are the current user id as user_AuthenticatedId and the default D365/CRM-specific custom dimensions.


Page load time isn't really anything special (other than being more accurate), as it's just using the browser's performance APIs to read what the browser records, as opposed to trying to start a timer yourself, which would never really be close to accurate anyway. I'm logging these as a custom metric.


7.4 seconds sounds a little more accurate.

Logging traces, metrics, events, caught exceptions, dependencies, and method execution times is all pretty straightforward. Logging XMLHttpRequest dependency times can be done with one line of code. Pass the request and whatever name you want to the tracking function, and then you won't need to worry about wrapping a timer around anything or handling the end times in both successful and error events. Examples for everything are on the wiki page.

All the code is on GitHub:
https://github.com/jlattimer/D365AppInsights.Js

I've packaged the source files for distribution so that you can either use them as is or make your own modifications. If you're going to be doing the latter, read the notes here and/or look at the gulp file in the source code to see how the files are being put together.

NuGet JavaScript/TypeScript source package:
https://www.nuget.org/packages/JLattimer.D365AppInsights.Js

Manufacturers: Not Tracking Direct Labor Costs?

How Do You Even Know What to Charge Clients, or What Kind of Profits You’re Seeing? A major element in the manufacturing environment too often overlooked is the calculation of direct labor costs...(read more)

Internet of Things (IoT) Device Considerations for Dynamics 365 and Connected Field Service


Microsoft has enhanced Dynamics 365 for Field Service with the Connected Field Service Add-on, which can enable your organization to take advantage of proactive and predictive service scenarios. By leveraging an investment in IoT capable devices your organization can take more proactive steps to detect, diagnose, and correct problems before they arise.

The power of the Connected Field Service Add-on is its simplicity and how it can process messages from IoT devices using the Dynamics 365 workflow engine. Registered IoT devices can send messages to Dynamics 365 for Field Service, where workflows can determine how to respond to those messages by sending emails or creating new records such as work orders or even using IoT Device Commands to reboot a device.

Powerful IoT Capabilities

Connected Field Service is very powerful in allowing IoT scenarios that take full advantage of Microsoft Dynamics 365 capabilities while also being fully customizable and extensible. Microsoft has enabled any entity to be IoT-enabled for straightforward IoT integration by using the register custom entity action. The add-on also brings several new entities and custom actions specific to IoT. New custom actions allow device registration scenarios, the ability to parse incoming messages for String, Number, or Boolean data types, and capabilities to handle duplicate messages that may be received from an IoT device. You can also now use IoT device data in custom dashboards to display aggregates, determine trends, or other metrics.

New Service Capabilities

Your organization can now take advantage of new scenarios that allow the remote monitoring of customer assets using IoT devices with sensors. You can be notified of potential trouble with your equipment and take corrective action by sending the IoT device a command. You can use predictive machine learning to measure sensors in the field and know when a device may have the potential to fail soon. You also have access to details about deployed equipment and can have parts and supplies on hand for the service technician to complete the call in one trip.

The World of Sensors for IoT Devices

There are many analog and digital sensor options available that you can use with your IoT devices.

Example list of sensors you can integrate with your IoT device: Accelerometer, Air Quality, Altitude, Ammonia, Barcode, Barometric Pressure, Button, Capacitive Touch, Carbon Monoxide, Circular Touch Potentiometer, Coin Acceptor, Ethanol, Fingerprint, Flex Sensor, Force Resistive Resistor, Gas, Gesture, Gyro, Hydrogen, Humidity, Infrared Sensor, IR Beam Break, IR Distance, Joystick, Keypad, Laser Beam Break, Light Sensor, Liquid Flow, Liquid Level, Location (GPS), Magnetic Contact Switch, Magnetometer / Hall Effect, Magstripe, Methane/Propane/Iso-Butane, Microphone, Motion, Muscle Sensor, Photo Capture, Piezo, Potentiometer Knob, Pressure, Proximity, Pulse, Radiation / Geiger Counter, RGB Color, Ribbon Touch Sensor, Rotary Encoder, Temperature, Tilt, Ultrasonic Rangefinder, Ultraviolet, Vibration, and Wind Speed.

Getting started with the Azure IoT DevKit

The AZ3166 board contains an EMW3166 Wi-Fi module with 256K of SRAM, plus an OLED display, buttons, LEDs, a headphone jack, a microphone, and sensors for temperature, humidity, pressure, motion, and more… all for a very affordable price.

  • Order MXChip IoT DevKit here.
  • Get the software here.
  • Azure IoTHub examples for Raspberry Pi, Intel Edison, Adafruit Feather ESP8266, Adafruit Feather M0, and Sparkfun ESP8266 here.

Does Your IoT Business Case Make Sense?

Consider an IoT business case to track the environmental conditions for eggs from a farm to a grocery store. We might consider sensors for temperature, humidity, and acceleration (in case of drops). The use case would track eggs per flat, of which there would be many per pallet. Knowing a dozen eggs in our local grocery are often on sale for less than one US dollar, we may ask how a client could justify the cost of the IoT device for tracking these eggs. While IoT is a hot buzzword, there are many things to consider in your business case for using IoT devices.

Does your IoT Device Cost Exceed the Product, Service, or Reputation Costs?

Not all business cases are going to equate to a product cost. It may represent the value of service to that customer because IoT is going to give you a predictive edge to “Wow” the customer for a lifetime. Maybe your business case will establish your reputation as someone who created a game changer in the marketplace?

How will your device connect to Azure IoT Hub?

Your device will need some pathway to the Internet. Most examples will show the device itself having Internet connectivity, either directly through an Ethernet port or wirelessly with built-in Wi-Fi. There is also Azure IoT Edge, which can act as a transparent proxy for other devices. It should also be possible to support other devices over Bluetooth or radio using the building blocks Microsoft has provided. Each device will need to be registered separately with its own connection string, and any computer proxying for multiple devices will need to send the correct connection string for the correct device so that Azure IoT Hub tracks each one appropriately.
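As a rough sketch of that per-device registration, assuming the Az.IotHub PowerShell module's device cmdlets are available and using hypothetical resource names, each device gets its own identity and device-specific connection string:

$rg  = "ContosoIoT-RG"                # placeholder resource group
$hub = "contoso-field-service-hub"    # placeholder IoT Hub name

# Create an identity for one device
Add-AzIotHubDevice -ResourceGroupName $rg -IotHubName $hub -DeviceId "egg-pallet-001"

# Retrieve the device-specific connection string that this device (or a proxy such as Azure IoT Edge) presents so IoT Hub attributes its telemetry correctly
Get-AzIotHubDeviceConnectionString -ResourceGroupName $rg -IotHubName $hub -DeviceId "egg-pallet-001"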

Will your IoT device take a beating?

Will your device remain in conditions where it will take a physical beating? Will your device be exposed to extreme high or low temperatures? Will your device be exposed to conductive liquids? Should you consider rugged or replaceable sensors which can take a beating? Have you considered what type of enclosure and dimensions of that enclosure will be required?

How will your IoT device be powered?

Most devices are powered via micro USB or direct current jack on the board. For stationary devices, it may be possible to power those with normal wall adapters. In mobile scenarios, the device will need some type of external battery pack with removable batteries or rechargeability. In some mobile scenarios, it may make sense to power the device with a Lithium Ion battery pack which is rechargeable via solar.

Your costs don’t end with the device.

Devices will need to be repaired or replaced, will require new batteries and new sensors, or may need to be retired. Infrastructure may also be required for your devices to work. We have seen cases where there are dead spots in Wi-Fi even within the same building. Make sure to include the device replacement, device maintenance costs, and infrastructure in your business case.

Training costs

Don’t forget the people costs! Consider if your device requires human interaction or needs to be checked up on from time to time.

It’s a Long Way from Maker to Production Device.

Most of the development boards you will find are designed for the low-quantity maker community, which builds small quantities or prototypes. There are many printed circuit board layout programs and manufacturers that can take your printed circuit board designs and produce a finished board in small or large quantities. Having production printed circuit boards is only half the issue; you also need to source enclosures, sensors, batteries, solar, connectors, and other components that make this a finished product. Once you have all the components, you still need assembly and quality control. If you are considering a production rollout, you may want to consider a turnkey IoT design firm that can produce a custom device for you.

Want to learn more about IoT and Dynamics 365 for Field Service? Join PowerObjects, this year’s Platinum Sponsor, at the Microsoft Business Applications Summit on July 22-24 in Seattle. Register with code HCL100dc to receive a $100 discount.

Happy Dynamics 365’ing!

Historical GP Reports and Power BI


It’s pretty tough to get GP’s historical reports into Power BI. If, for example, you want to analyze receivables as of the end of last month using Power BI, it can be a real pain to get at the data.

Reports like the Payables or Receivables Historical Aged Trial Balance, or the Historical Inventory Trial Balance, use temporary tables in GP and are really hard to reproduce outside the application. The SSRS versions of these reports included with Dynamics GP use stored procedures, but Power BI doesn't support stored procedures. There are workarounds, but with the complexity of running the procedures with the right parameters and the challenge of making that data available to Power BI, it's far from simple.

The newest version of Historical Excel Reporting for GP has a streamlined Raw Data tab making it incredibly easy to bring historical data into Power BI.

With one of the Historical Excel Reports, simply open, refresh, and save the Excel report so it has the latest data.  Then, in Power BI select that Excel document as a source and pick the Raw Data tab.

Now the data is available to build visualizations in Power BI. Next time simply refresh and save the Excel sheet and refresh Power BI.

For more info on Historical Excel Reporting, visit the Historical Excel Reporting for GP page.

The post Historical GP Reports and Power BI appeared first on DynamicAccounting.net.
