Navigating the Information Value Chain
I’ve been researching the science behind DNA: its ability to store organized, structured instructions (information) that are read by specifically designed processes in our bodies to create millions of components, each with specific functions and the capability for further specific processes, and the highly precise machinery required to assemble a human being from those instructions (data) and processes.
It does not seem logical to me that people, especially people in IT with a programming and data/information background, would believe that something as complex as the processes that create us arose by accident or at random. That is like telling a client or business leaders that we don’t need to modify the software because the software will fix itself.
And by that I mean: I know full well that we are capable of DESIGNING software that can handle the situations we pre-designed it for; however, software that creates or modifies itself, or produces output different from its original design, is what we would call artificially INTELLIGENT. There is no example I know of where we would realistically tell a client that we’ve created a program, say for a 3-D printer, that takes in certain data and, if they wait long enough, will by accident or at random improve itself and, more importantly, create a whole new object completely different from what it was designed to produce.
This is an excellent post that explains the details behind this concept.
more to come ….
John Indelicato – Founder
Victoria (Whiteside) Stasiewicz – Bachelor of Science in Information Management – Herzing University 2018
Our daughter Victoria Stasiewicz graduated today, along with her husband Brandon Stasiewicz. Theresa and I are very proud of Victoria and her incredible accomplishments.
The primary takeaway from this article is that you don’t start your Machine Learning, MDM, Data Quality, or Analytical project with “data” analysis; you start with the end, the business objective, in mind. We don’t need to analyze data to know what it is; it’s like oil, or water, or sand, or flour.
Unless we have a business purpose for these things, we don’t need to analyze them to know what they are, because they are only ingredients in whatever we’re trying to make. What makes them important is the degree to which they are part of the recipe, and how they are associated.
Business Objective: Make Dessert
Business Questions: The consensus is Chocolate Cake; how do we make it?
Business Metrics: Baked Chocolate Cake
Metric Decomposition: What are the ingredients and portions?
2/3 cup butter, softened
1-2/3 cups sugar
3 large eggs
2 cups all-purpose flour
2/3 cup baking cocoa
1-1/4 teaspoons baking soda
1 teaspoon salt
1-1/3 cups milk
Confectioners’ sugar or favorite frosting
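The decomposition above can be sketched in code: the metric (a baked chocolate cake) decomposes into ingredients with portions, and any one ingredient is only ever evaluated against the recipe, never in isolation. This is a minimal illustrative sketch; the dictionary names and pantry quantities are my own invention, not from the post.

```python
# Metric decomposition: ingredient -> portion required by the recipe.
recipe = {
    "butter_cups": 2 / 3,
    "sugar_cups": 1 + 2 / 3,
    "eggs": 3,
    "flour_cups": 2,
    "cocoa_cups": 2 / 3,
    "baking_soda_tsp": 1.25,
    "salt_tsp": 1,
    "milk_cups": 1 + 1 / 3,
}

# What we actually have on hand (invented example data).
pantry = {"butter_cups": 1, "sugar_cups": 1, "eggs": 3, "flour_cups": 2,
          "cocoa_cups": 1, "baking_soda_tsp": 2, "salt_tsp": 3, "milk_cups": 2}

# An ingredient only becomes a "quality problem" relative to the recipe:
# we flag shortfalls against required portions, not raw ingredient stats.
shortfalls = {ing: need - pantry.get(ing, 0)
              for ing, need in recipe.items()
              if pantry.get(ing, 0) < need}

print(shortfalls)  # only sugar falls short of the recipe
```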
So here is the point: you don’t start figuring out what you’re going to have for dessert by analyzing the quality of the ingredients. Their quality is not important until you put them in the context of what you’re making and how they relate, in essence how the ingredients are linked or chained together.
In relation to my example of dessert and a chocolate cake: you might only have one cup of sugar, the eggs might have sat out on the counter all day, the flour might be coconut flour, and so on. You make your judgment on whether or not to make the cake by analyzing all the ingredients in the context of what you want, which is a chocolate cake made with possibly warm eggs, coconut flour, and only one cup of sugar.
Again, belaboring the point: you don’t start your project by looking at a single entity, column, or piece of data until you know what you’re going to use it for in the context of meeting your business objectives.
Applying this to machine learning, data quality, and/or MDM, let’s take the following example:
Business Objective: Determine Operating Income
Business Questions: How much do we make, and what does it cost us?
Business Metrics: Operating income = gross income – operating expenses – depreciation – amortization.
Metric Decomposition: What do I need to determine operating income?
Gross Income = Sales Amount from Sales Table, Product, Address
Operating Expense = Cost from Expense Table, Department, Vendor
Dimensions to analyze for quality: Product, Address, Department, Vendor.
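The decomposition above can be sketched as a small computation. This is a hypothetical illustration; the table rows, column names, and dollar amounts are invented, not taken from a real system.

```python
# Sales table: the amount plus the Product and Address dimensions.
sales = [
    {"amount": 500.0, "product": "Widget", "address": "TX"},
    {"amount": 300.0, "product": "Gadget", "address": "WI"},
]
# Expense table: the cost plus the Department and Vendor dimensions.
expenses = [
    {"cost": 200.0, "department": "Assembly", "vendor": "Acme"},
    {"cost": 100.0, "department": "Shipping", "vendor": "FastCo"},
]
depreciation, amortization = 50.0, 25.0  # invented figures

gross_income = sum(row["amount"] for row in sales)        # 800.0
operating_expense = sum(row["cost"] for row in expenses)  # 300.0

# Operating income = gross income - operating expenses - depreciation - amortization
operating_income = gross_income - operating_expense - depreciation - amortization
print(operating_income)  # 800 - 300 - 50 - 25 = 425.0
```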
You may think these are the ingredients of our chocolate cake with respect to operating income; however, we’re missing one key component: the portions, or relationships. In business, this means the association, hierarchy, or drill path the business will follow when asking a question such as: why is our operating income low?
For instance, the CEO might first ask: in what area of the country are we making the least amount of money?
After that, the CEO may ask: in that part of the country, what product is making the least amount of money, who manages it, and what about the parts suppliers?
Product => Address => Department => Vendor
Product => Department => Vendor => Address
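The CEO’s two questions above trace one of these drill paths (Address, then Product). Here is a minimal sketch of that drill-down; the sales rows are invented for illustration.

```python
from collections import defaultdict

# Invented sales facts carrying the Address and Product dimensions.
sales = [
    {"address": "TX", "product": "Widget", "amount": 500.0},
    {"address": "TX", "product": "Gadget", "amount": 900.0},
    {"address": "WI", "product": "Widget", "amount": 200.0},
    {"address": "WI", "product": "Gadget", "amount": 100.0},
]

# Drill level 1: in what area of the country do we make the least money?
by_region = defaultdict(float)
for row in sales:
    by_region[row["address"]] += row["amount"]
worst_region = min(by_region, key=by_region.get)

# Drill level 2: within that region, which product makes the least?
by_product = defaultdict(float)
for row in sales:
    if row["address"] == worst_region:
        by_product[row["product"]] += row["amount"]
worst_product = min(by_product, key=by_product.get)

print(worst_region, worst_product)  # WI Gadget
```

The point of the sketch is that each step of the drill-down only works because the fact rows carry the dimension keys that encode the relationship.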
Many times these hierarchies, drill-downs, associations, or relationships are based on the various legal transactions of related data elements the company conducts with its customers and/or vendors.
The point here is that we need to know the relationships, dependencies, and associations required for each business legal transaction we’re going to have to build, in order to link these elements directly to the metrics required for determining operating income, and subsequently to answer questions about it.
No matter the project, whether we are preparing a machine learning model, building an MDM application, or providing an analytical application, if we cannot provide these elements and their associations to a metric, we will not have answered the key business questions and will most likely fail.
The need to resolve these relationships is what drives the need for data quality, which is really a way of understanding what you need to do to standardize your data, because the only way to create the relationships is with standards and mappings between entities.
The key is mastering and linking the relationships or associations required for answering business questions; it is certainly not just mastering “data” without context.
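A tiny sketch of why standardization is what makes the relationship possible: without a mapping table, “ACME Corp.” in an expense feed never joins to “Acme” in the vendor master. All names and rows here are invented for illustration.

```python
# Vendor master entity (invented).
vendor_master = {"Acme": {"vendor_id": 1}, "FastCo": {"vendor_id": 2}}

# A standardization/mapping table built during data-quality work:
# raw source spellings -> the mastered standard name.
standard_name = {"ACME Corp.": "Acme", "acme inc": "Acme", "Fast Co": "FastCo"}

# Raw expense rows as they arrive from the source system.
expense_rows = [
    {"cost": 200.0, "vendor": "ACME Corp."},
    {"cost": 100.0, "vendor": "Fast Co"},
]

linked = []
for row in expense_rows:
    name = standard_name.get(row["vendor"], row["vendor"])  # standardize first...
    master = vendor_master.get(name)  # ...and only then does the join resolve
    linked.append((name, master["vendor_id"] if master else None))

print(linked)  # [('Acme', 1), ('FastCo', 2)]
```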
We need MASTER DATA RELATIONSHIP MANAGEMENT, not just MASTER DATA MANAGEMENT.
So my final thought is that the key to making the chocolate cake is understanding the relationships and the relative importance of the data/ingredients to each other, not the individual quality of each ingredient.
This also affects the workflow. Many inexperienced MDM data architects do not realize that these associations form the basis for the fact tables in the analytical area. These associations will be the primary path (workflow) the data stewards follow in performing maintenance; the stewards will be guided by these associations in maintaining the surrounding dimensions/master entities. Unfortunately, some architects will instead focus on the technology and not the business. Virtually all MDM tools are model-driven APIs and rely on these relationships (hierarchies) to generate workflow and maintenance screens. Many inexperienced architects focus on the MVP (Minimum Viable Product), or a short-term technical deliverable, and are quickly called to task because the cost incurred by the business is not lowered, while the final product (the chocolate cake) is delayed and will now cost more.
Unless the specifics of questionable quality in a given entity or table are understood in the context of the greater business question and its associations, that entity cannot be excluded or included.
An excellent resource for understanding this context can be found by following John Owens.
Final, final thoughts: there is an emphasis today on creating the MVP (Minimum Viable Product) in projects. My take is that in the real world you need to deliver the chocolate cake; simply delivering the cake with no frosting will not do. In reality, the client wants to “have their cake and eat it too.”
Operating Income is a synonym for earnings before interest and taxes (EBIT) and is also referred to as “operating profit” or “recurring profit.” Operating income is calculated as: Operating income = gross income – operating expenses – depreciation – amortization.