9.14.2005

 

Robo-justice

A fascinating article that predicts that AI software can do quite a bit of the work done by lawyers today:

Robo-justice - The Boston Globe

From the article: Of course, it is quite easy to imagine AI software eventually taking over the entire judicial system and doing a better job than humans, because the software would be unemotional.

There are about 700,000 lawyers in the U.S., and, like doctors and pilots, they are highly paid. There is a lot of economic pressure to eliminate them. If software can take over half of these jobs, it would save the economy billions of dollars. See also Robots taking jobs.
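As a rough sanity check on that "billions of dollars" figure, here is a back-of-the-envelope sketch. The 700,000 count is from the post above; the average compensation number is an assumption chosen only for illustration:

    # Back-of-the-envelope estimate; the average cost per lawyer is a guess.
    lawyers_in_us = 700_000        # figure cited above
    share_automated = 0.5          # "half of these jobs"
    avg_annual_cost = 150_000      # assumed average compensation (illustrative only)

    annual_savings = lawyers_in_us * share_automated * avg_annual_cost
    print(f"Estimated annual savings: ${annual_savings / 1e9:.1f} billion")
    # Prints: Estimated annual savings: $52.5 billion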

Comments:
Well, it is called Legal Code, after all.

In Accelerando, lawsuits are dispatched electronically. (Programmers would call this a "programmatic interface" to lawsuits.) There is hardly a corporation (corpus, body) that is not an AI. The president, vice-president, and treasurer are all AIs.
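As an aside, here is a purely hypothetical sketch of what a "programmatic interface" to lawsuits could look like; every name and field below is invented for illustration and does not describe any real system:

    # Hypothetical sketch of a machine-to-machine interface for filing a suit.
    from dataclasses import dataclass

    @dataclass
    class Claim:
        plaintiff: str
        defendant: str
        cause_of_action: str
        damages_sought: float

    def file_suit(claim: Claim) -> str:
        """Pretend to submit a claim to an electronic court; returns a docket id."""
        # In the Accelerando scenario, no human lawyer would be in this loop.
        return f"DOCKET-{abs(hash((claim.plaintiff, claim.defendant))) % 100_000:05d}"

    print(file_suit(Claim("Acme AI Corp", "Globex AI Corp", "breach of contract", 1_000_000.0)))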
 
I'm having a hard time understanding "unemotional." There's NO SUCH THING. ALL "logic" comes from Emotion.

What would an "unemotional" system compute? NOTHING. An EMOTIONAL programmer will program the damn thing.

This is why AI is having trouble: it must place value on things, and the only way for it to set that value is from an irrational, emotional, human perspective.
 
I think we're only talking (at this point in history) about automating basic court cases: "He said, she said, the evidence said, and the law says..." (a toy sketch of that idea follows below).

Not things that require deep consideration of the Constitution, present-day social realities, etc.
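To make the "the evidence said, and the law says" idea concrete, here is a toy sketch; the rule and the facts are invented, and real cases are obviously not this simple:

    # Toy example: mechanically applying one rule to an agreed set of facts.
    SPEED_LIMIT_KPH = 50  # invented rule for illustration

    def decide_speeding_case(agreed_facts: dict) -> str:
        """Return a verdict for a basic case where the facts are not in dispute."""
        if agreed_facts["measured_speed_kph"] > SPEED_LIMIT_KPH:
            return "liable: measured speed exceeds the posted limit"
        return "not liable"

    print(decide_speeding_case({"measured_speed_kph": 72}))
    # Prints: liable: measured speed exceeds the posted limit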
 
The good news is that most lawyers and doctors are very smart people, and the normal screed, "what about the unemployed!" doesn't apply at all. If anyone should be able to find a non-automated profession, it's them.

A judge should NOT be automated. The entire practice of minimum sentencing and other close-to-automation requirements has been a disaster. If anything, recidivism can be reduced with more creative and individual sentences.
 
Phew! Well, we're still safe, then. That's a relief!
 
Off topic, but...

Just imagine what a Manna-like system could do with this information.
 
The NISTEP 2001-2030 technology foresight report places "Development of software (expert systems) capable of completely taking the place of specialist professions such as judges, lawyers and patent attorneys" in 2025, but 48% of respondents considered it unlikely ever to be realized.
 
What a crock. Most people don't seem to realize that laws have developed the way they have because those laws benefit people in some way. Be it road rules that cause tickets to be issued which enrich police departments, or tax laws that enrich the government, special interests and lobbying insiders... I could go on. The fact is, human interests are behind the creation of laws. Robots/computers would gain nothing from creating or enforcing laws... only the people behind them.
 
Robots/computers would gain nothing from creating or enforcing laws... only the people behind them.

Is that an argument against the concept of automated justice systems?

If so, it's a very strange one. I'm imagining you in the early 1900's. Someone tells you, "One day, police will use police cars." To which you wisely respond: "No, what a crock. Police cars would gain nothing from creating or enforcing laws... only the people behind them."

To which the proper response is: "Yeah, and, ... so what?"
 
He is saying that automation in law and law enforcement would be subject to the same forces of current law: special interests.

Those designing the system of automation would act in a self-beneficial manner.

This is why automated lawyers make sense, while automated judges or law-makers don't. The lawyers don't decide the rules, they just use them to their best advantage.
 
So, if I understand right, the argument is:

(A) Justice is subvertible.

(B) Automation used in justice would be subvertible.

(C) Therefore, we can't use Automation in our justice system.

The argument defeats itself: we have subvertible justice today (as assumed), so why can't we have subvertible justice tomorrow? We already observe, in our daily lives, that we make use of imperfect justice systems. The argument holds perfection as the only workable standard.

The conclusion that we automate lawyers (only) also doesn't make sense. What is so special about the personal lawyer software, such that it is not subvertible? Consider spyware.

If anything, it seems to me that requiring the justice system not to apply automation would only cripple it. It's like insisting that everybody can use ever-more-sophisticated guns and weaponry, but not the police.
 
>>If anything, it seems to me that requiring the justice system not to apply automation would only cripple it. It's like insisting that everybody can use ever-more-sophisticated guns and weaponry, but not the police.<<

NOW you're beginning to think like a Libertarian. Welcome. :)

www.lp.org
 
All systems are subvertible; Libertarianism does not follow.
 
All systems can turn bad. Not all systems have legitimate violence behind them, like the government does.

I can choose not to participate in private dealings with whatever company or individual I choose.

I have no such choice with the law, but to move.

In the market, you have unanimity without conformity. Everyone interacts through voluntary, mutually beneficial exchange. Yet there is no standard to be followed.

In government, you have conformity without unanimity. There is always a disagreeing minority, yet EVERYONE is required to follow the laws or face violence.

I will almost ALWAYS prefer the former. The weaker the latter the better, as long as it stays effective at curtailing strong neighborhood effects like nuclear waste dumping and murder.

Automation in justice does not mean better IT; that is merely increased productivity. Having a program, written by a third party with its own special interests, make decisions is worse than having an accountable individual make those decisions.

The only preference I would have is investment in better lawyer/advocate expert systems. It could scale easily to become a free resource.


To stress, accountability in a limited government is the ideal. Automation masks the true decision makers. If it were all open source, that would be a different story...
 
YAH Yah yah, we've all heard the Libertarian spiel before.
 
Judging will never be automated. Why? Litigating a matter is itself a mistake: the costs associated with going to court are a waste. If both parties "know" what the outcome is going to be (which they would, or should, if the legal outcome were obvious enough to be decided by an automaton, presumably with an agreed statement of facts), then they would both gain by settling the dispute and sharing the surplus (i.e. the unincurred deadweight loss). If the decision is difficult, however, on account of the facts needing to be determined by a judge (weighing issues of credibility, etc.), it is extremely unlikely to be done well by a machine any time soon. Lawyers and law students can rest easy.
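To illustrate the settlement argument with invented numbers: suppose both sides expect a $100,000 judgment and each would spend $20,000 litigating. The plaintiff nets $80,000 by going to trial and the defendant is out $120,000, so any settlement between those figures leaves both better off, splitting the $40,000 of avoided costs:

    # Invented numbers illustrating why a predictable case should settle.
    expected_judgment = 100_000   # amount both sides expect the court to award
    plaintiff_costs = 20_000      # plaintiff's cost of going to trial
    defendant_costs = 20_000      # defendant's cost of going to trial

    plaintiff_net_at_trial = expected_judgment - plaintiff_costs    # 80,000
    defendant_loss_at_trial = expected_judgment + defendant_costs   # 120,000

    # Any settlement amount strictly between these two values makes both parties
    # better off; the gap is the deadweight loss they avoid by not litigating.
    surplus = defendant_loss_at_trial - plaintiff_net_at_trial
    print(f"Bargaining range: ${plaintiff_net_at_trial:,} to ${defendant_loss_at_trial:,}")
    print(f"Shared surplus from settling: ${surplus:,}")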
 