Data Scientists are from Mars and Software Engineers are from Venus (Part 2)

In Part 1 of this series, Data Scientists are from Mars and Software Engineers are from Venus, we examined five key dimensions of difference between software and models. The natural follow-on question is: so what? Does it really matter if models are conflated with software and data scientists are treated as software engineers? After all, for a large cross-section of the population, and more importantly the business world, the similarities between them are far more visible than their differences. In fact, Andrej Karpathy refers to this new way of solving problems using models as Software 2.0. If models really are the next iteration of software, are these differences truly consequential?

The challenges of building models are exacerbated when we conflate models and software. In this blog, we describe the twelve 'traps' we face when we conflate the two and argue that we need to be cognizant of the differences and address them accordingly.

Data Trap

As we examined in our previous blog, models are formal mathematical representations that can be applied to data or calibrated to fit it. Hence, data is the starting point for building a model. While test data is critical for building software, one can start building an algorithm from a given specification before collecting or preparing the test data.

However, when it comes to building models, the data has to be of good quality (garbage in, garbage out), available in sufficient quantity, and, for supervised learning models, labeled (a label is the response variable the model is trained to predict). The data also needs to be fit for purpose. For example, it should be representative of the population the model will encounter when deployed in production. Recent examples of skin-type and gender biases in facial recognition models underscore the importance of having a representative (and statistically significant) dataset for building models. Such data biases are surprisingly common in practice.

We have seen the failure to gather, curate, and label the data needed to build a model as one of the significant traps of mistaking models for software. A number of companies eager to launch their AI or ML programs pay very little attention to this aspect and start building models with very little data. For example, a company recently wanted to build an NLP (natural language processing) model to extract structured information from documents using just eight PDF documents. The cost and time required, especially from domain experts (e.g., legal experts or clinicians), make labeling a significant challenge. While techniques are evolving to learn from less data and to help experts label data as part of their normal work, acquiring sufficient, well-labeled data remains a significant departure from how software is traditionally developed.

In summary, the data trap can be further categorized into the data volume trap, data quality trap, data bias trap, and data labeling trap. A company can suffer from one or more of these traps. Getting a realistic sense of the data trap is critical to ensuring you don't go down the wrong path, spending millions on your modeling effort without realizing the expected returns. Understanding these traps can also change how you approach the modeling effort, for example by first collecting more labeled data or by looking for alternative rule-based ways of solving the problem.
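
As a rough illustration, a few lightweight checks run before any modeling can surface these traps early. The sketch below is a minimal example in Python; it assumes a pandas DataFrame with hypothetical `label` and `skin_type` columns, and the row-count and subgroup-share thresholds are illustrative rather than prescriptive.

```python
import pandas as pd

def pre_modeling_checks(df: pd.DataFrame, label_col: str = "label",
                        group_col: str = "skin_type", min_rows: int = 1000) -> None:
    """Rough checks for the data volume, quality, labeling, and bias traps."""
    # Data volume trap: is there enough data at all?
    if len(df) < min_rows:
        print(f"Volume warning: only {len(df)} rows (expected at least {min_rows}).")

    # Data quality trap: how much of each column is missing?
    missing = df.isna().mean().sort_values(ascending=False)
    print("Fraction missing per column:\n", missing.head())

    # Data labeling trap: how many rows actually carry a label?
    print(f"Share of labeled rows: {df[label_col].notna().mean():.1%}")

    # Data bias trap: is every subgroup of the target population represented?
    group_shares = df[group_col].value_counts(normalize=True)
    print("Subgroup shares:\n", group_shares)
    if (group_shares < 0.05).any():
        print("Bias warning: at least one subgroup makes up less than 5% of the data.")
```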

Scoping Trap

With three to four decades of software engineering practices and methodologies behind them, software developers and systems analysts have become reasonably good (or at least much better than model developers) at estimating the time required to build and test software. With agile software development methods, software can be developed incrementally and iteratively in fixed time periods, typically two-week or four-week sprints.

Assuming that we want our models to satisfy certain performance criteria (e.g., accuracy, precision, recall), it is hard to estimate the effort and duration it will take to achieve the results. Worse, we may not be able to tell a priori whether we can, in fact, succeed in satisfying the performance criteria. In addition, the difficulty of meeting the performance criteria may be non-linear. For example, in one of our recent client projects we were able to achieve 90% accuracy with a decision tree model within a couple of weeks. However, the client was aiming for 99% accuracy. After a couple more months, the accuracy could get no better than 93%, even with a neural network model.
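
To make such performance criteria concrete, here is a small sketch that computes accuracy, precision, and recall with scikit-learn; the label arrays are made-up toy values, not results from the client project described above.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Toy ground-truth labels and model predictions (illustrative only)
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

print(f"Accuracy:  {accuracy_score(y_true, y_pred):.2f}")
print(f"Precision: {precision_score(y_true, y_pred):.2f}")
print(f"Recall:    {recall_score(y_true, y_pred):.2f}")
```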

Lukas Biewald gives another classic example: in one of the Kaggle competitions, thousands of people around the world improved the accuracy of a model from a baseline of 35% to 65% in just one week. However, even after several more months and several thousand people trying to improve this result, the best they managed was 68% accuracy, a mere 3% improvement.

We call this the scoping trap: data scientists are unable to scope the effort and duration (or time), the data, and the computational resources required to achieve a certain performance criterion (e.g., accuracy). The scoping trap can occur at different stages of the model lifecycle. It might be difficult to scope the model to achieve a certain performance before the model is built, which we call the pre-build scoping trap. The training scoping trap is when data scientists are unable to tell how long they should continue training the model (with new data, new techniques, additional resources, etc.) in order to achieve the performance criteria during the training phase.

These two traps can drive a product manager, scrum master, or project manager crazy when it comes to embedding models within traditional software or delivering a data science project. In large software development efforts, we have often seen the 'voice' of the data scientist being ignored: tight, fixed deadlines force data scientists to perform simple descriptive analytics rather than generate insights from a model. Alternatively, they might develop brittle rule-based models as opposed to true ML models. We believe this is one of the significant reasons many AI/ML projects do not deliver on their stated ROI (return on investment).

When one builds models that can learn continuously, we face an additional challenge. Let's say the target accuracy the business has set for model deployment is 90%, and the trained model has achieved 86%. The business and data scientists may together decide to deploy the model, have it continue to learn, and hope that its accuracy crosses the 90% threshold. Once again, the data scientists will be unable to scope whether and when the model will cross this threshold, and under what conditions. We call this variant the deployment scoping trap.

Finally, models can suffer from model drift, where the performance of the production model decreases because the underlying conditions change. Such drift can happen abruptly or gradually. Once again, data scientists will be unable to scope the nature, timing, and extent of the deterioration in model accuracy. We call this the drift scoping trap. As a result, one needs to institute model monitoring practices to measure and act on such drift, as sketched below.
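
One minimal sketch of such a monitoring practice: track rolling accuracy on recently labeled production predictions and flag potential drift when it falls below an agreed threshold. The 90% threshold and 500-observation window are illustrative assumptions, not recommendations.

```python
from collections import deque

class DriftMonitor:
    """Track rolling accuracy in production and flag potential model drift."""

    def __init__(self, threshold: float = 0.90, window: int = 500):
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)  # 1 = correct prediction, 0 = incorrect

    def record(self, prediction, actual) -> None:
        """Call as ground-truth labels arrive for production predictions."""
        self.outcomes.append(1 if prediction == actual else 0)

    def drift_suspected(self) -> bool:
        """Return True once the rolling accuracy drops below the threshold."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.threshold

# Usage: record() each labeled outcome, then alert when drift_suspected() is True
monitor = DriftMonitor(threshold=0.90, window=500)
```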

In summary, the scoping trap can be further categorized into pre-build scoping, training scoping, deployment scoping, and drift scoping traps. The figure below highlights these different types of scoping traps using an illustrative example.

[Figure: Scoping traps and how they manifest pre-build, during training, and after deployment]

Return Trap

Business sponsors and project managers often have to show the expected ROI before embarking on building any large-scale software. As data science projects become more common in enterprises, it is natural that business leaders want to understand the expected ROI before making or prioritizing their investments. While estimating the returns on a new piece of software is not easy, the task gets even more complex when it comes to the expected ROI of models.

Conceptually, ROI is a relatively straightforward computation: it is the net benefit divided by the cost.

ROI = (Benefits from model - Cost of model) / Cost of model
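
For instance, with purely illustrative numbers, a model that costs $200K to build and operate and delivers $500K in benefits has an ROI of (500K - 200K) / 200K = 1.5, or 150%. As we discuss below, the difficulty lies almost entirely in estimating the benefit term.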

The benefits of AI/ML models in companies typically fall under two broad categories: efficiency and effectiveness. When companies automate manual or cognitive tasks that are repetitive in nature, they improve the efficiency of the process, reduce the time it takes to perform these tasks, and improve the productivity of their labor force. When companies use models to make better decisions, or to augment humans making decisions, they improve the effectiveness of those decisions. In other words, the benefits accrue from being faster and better. The question we need to ask is: faster and better relative to what baseline? It is in estimating this baseline that companies often fall short.

When automating a task, we need a baseline for how long it takes a human to perform that task. Unfortunately, estimating how long someone takes to perform a task is not easy, especially when it is a cognitive task (e.g., assessing the risk of a customer) or a non-repetitive task (e.g., handling exceptions in expense approval). People with different skills, backgrounds, and tenure might take different amounts of time to complete the task. A proper analysis of all these factors to determine the true duration of a task is a non-trivial exercise, and it may be impractical in a service or knowledge-based organization with a wide variety of tasks spanning a spectrum of complexity levels.

Another common problem in deriving the efficiency baseline is that it can be difficult to isolate the given task from all the other tasks a person does. Take the example of a purchasing manager who, among her different activities in a day, examines a purchase order in the system and cross-checks it against the packing slip and vendor invoice to determine whether the transaction is accurate. Let's say we have built an NLP model to extract key fields from the invoice so that they can be reconciled with the purchase order. Even for this single individual, the total time spent on invoice processing is hard to compute: the task is interleaved with other work, such as attending meetings or inspecting shipments, and it depends on the complexity of the purchase orders, invoices, and packing slips (for example, complexity and time increase if a shipment spans multiple purchase orders or multiple invoices).

Getting a baseline for effectiveness is an even more challenging endeavor. Efficiencies were computed for tasks: discrete activities whose duration can be measured. When it comes to effectiveness, however, we are evaluating decisions and actions. How do we determine whether one action is better than another? The results of an action are multi-dimensional, can be uncertain, and may be delayed in their effect. Say you are driving and, just as you near an intersection, the green light turns amber. Do you apply the brakes and risk the car closely following behind hitting you, or do you cross on amber (still legal)? Which action is better, and in what way: better for the vehicle behind you, better in terms of fuel consumption, or better in terms of obeying the law more strictly? And while this was a fairly simple action, estimating a baseline for decisions is even more complex.

So far, we have examined only the estimation of a baseline for efficiency and effectiveness. This must happen before we start building the model, so that we have a good idea of what performance we require of it. We call this the return estimation trap. Another type of return trap occurs once we have built and deployed the model and are trying to realize the benefits. We call this the return realization trap.

Once again we run into issues when calculating the returns. In the case of efficiency benefits, we may be able to show categorically that automation reduced the time required to complete a task. Let's say your automated invoice processing model has reduced the average time to process an invoice from 30 minutes to 15 minutes. If a person processes four invoices a day, they save an hour a day, or five hours in a week. If that person works ten hours a day, or fifty hours a week, the time saved is 10%. However, there may not be a tangible dollar benefit to the company, and this can happen for a number of reasons. An employee already working a 50-hour week might simply use the 5-hour saving to reduce the hours they work. That may still be an overall benefit to the organization in terms of employee satisfaction and retention, but we probably have not factored this into our benefit estimation. Even if they were working only the required 40 hours a week and automation saved them 5 hours, they might find other things to do to fill the gap, rather than the organization being able to monetize the fractional time savings. This is one of the biggest challenges with RPA (Robotic Process Automation) and IPA (Intelligent Process Automation): time is saved for individuals, so FTEs (full-time equivalents) decrease on paper, but the savings do not translate into a headcount reduction where you can clearly demonstrate the return from automation.
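
A back-of-the-envelope calculation using the illustrative numbers above shows how the nominal saving and the realized benefit can diverge; the hourly cost and the share of savings actually monetized are hypothetical inputs, sketched here only to make the trap concrete.

```python
# Illustrative numbers from the invoice example above
minutes_saved_per_invoice = 30 - 15
invoices_per_day = 4
work_hours_per_week = 50
hourly_cost = 40.0            # hypothetical fully loaded cost per hour

hours_saved_per_week = minutes_saved_per_invoice * invoices_per_day * 5 / 60
share_of_week = hours_saved_per_week / work_hours_per_week
nominal_benefit_per_week = hours_saved_per_week * hourly_cost

# The return realization trap: a fractional FTE saving often cannot be
# monetized unless it adds up to redeployed or reduced headcount.
fraction_monetized = 0.0      # hypothetical; often close to zero in practice
realized_benefit_per_week = nominal_benefit_per_week * fraction_monetized

print(f"Hours saved per week: {hours_saved_per_week:.1f} ({share_of_week:.0%} of the week)")
print(f"Nominal benefit: ${nominal_benefit_per_week:.0f}/week; realized: ${realized_benefit_per_week:.0f}/week")
```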

When it comes to realizing the benefits of more effective decisions or actions, we run into similar issues. The biggest challenge in these cases is attribution. Even when an action can be shown to be measurably better than an alternative, it is not always possible to isolate the entire context in which it was performed. In the example of stopping the vehicle or crossing the intersection on amber, stopping suddenly might be the better choice on a dry, sunny day and potentially the wrong one on a wet, slippery, snowy day. In that case, you cannot attribute all of the benefits of stopping on a dry, sunny day to your action; part of the credit goes to mother nature for providing the right environment. This attribution challenge is all too common when evaluating decisions and actions, where competitors, customers, suppliers, regulators, and a host of other stakeholders might have a hand in making an action or decision 'better' or 'worse'.

The return trap is more acute for models than for software because we are comparing the performance of these models with human performance. In cases where humans simply cannot perform certain tasks at the speed of automated models (e.g., algorithmic trading), or where the model can evaluate a humanly impossible number of choices and make the right decision (e.g., playing Go or chess), the value of models is reasonably clear. However, in the majority of cases, where models are automating tasks or augmenting human decisions or actions, the return trap is a significant challenge to contend with.

In summary, we end up with four different types of return trapsreturn efficiency estimation, return effectiveness estimation, return efficiency realization, and return effectiveness realization traps.

Summary

We have looked at three broad categories of traps and a total of twelve different sub-categories as shown below.

[Figure: Twelve traps of models across three different categories]

In Part 1 we examined five dimensions of difference between models and software. The data trap discussed above largely stems from the fundamental way in which models are constructed to fit the data. In addition, the uncertainty around the output and the inductive inference mechanism also contribute to the different data traps. The scoping trap arises from the need to be scientific (i.e., to take a test-and-learn, experimental approach) in training the models. Similar scoping traps are common in the pharmaceutical sector, where scientists cannot estimate the time required to find a drug to cure a condition, or whether a drug will successfully pass the different clinical trials. And even after a drug is released in the market, its efficacy can drop (e.g., antibiotic-resistant bacteria). The drift scoping trap is an effect of the dynamic manner in which the decision space evolves. Finally, the return traps occur due to the experimental and scientific nature of models and the dynamic nature of the decision space.

In subsequent blogs, we will look at some of the best practices to address these traps and challenges of scoping, building, and delivering models.

Authors: Anand S. Rao and Joseph Voyles

Original article: https://towardsdatascience.com/consequences-of-mistaking-models-for-software-94d813f115f5
