Monday, October 24, 2016

Merged PR #42 by @eirannejad to RevitPythonShell

This is exciting: I just merged a pull request on the RevitPythonShell repository. Sure, no big deal, except, it kinda is. Here’s why:

I don’t really have a lot of time to improve the RPS right now as I am mainly working on a totally unrelated project, the CityEnergyAnalyst. And I don’t really BIM much anymore.

Turns out, the RPS has a fan base and they’re stepping up with feature requests, but also improvements in the form of pull requests.

A pull request is a collection of commits that you propose to add to the master branch of the RPS software. You can fork your own version of the RPS and, with this feature, request that your changes be added back to the main version. This is probably the best way to get changes / bug fixes etc. into the software.

In this case, Ehsan Iran-Nejad stepped up and provided a small patch that adds the folder an external script is defined in to the search paths of the interpreter. This makes it easier to split your code into reusable libraries.

BTW: Check out Ehsan’s awesome project pyRevit which makes working with RPS even easier!

Friday, October 21, 2016

An example in refactoring the CEA

This post is the result of a refactoring effort between Shanshan Hsieh and me for the City Energy Analyst, specifically on pull request #371.

Compare these two snippets of code:


tHC_corr = [0, 0]
delta_ctrl = [0, 0]

# emission system room temperature control type
if control_system == 'T1':
    delta_ctrl = [2.5, -2.5]
elif control_system == 'T2':
    delta_ctrl = [1.2, -1.2]
elif control_system == 'T3':
    delta_ctrl = [0.9, -0.9]
elif control_system == 'T4':
    delta_ctrl = [1.8, -1.8]

# calculate temperature correction
if heating_system == 'T1':
    tHC_corr[0] = delta_ctrl[0] + 0.15
elif heating_system == 'T2':
    tHC_corr[0] = delta_ctrl[0] - 0.1
elif heating_system == 'T3':
    tHC_corr[0] = delta_ctrl[0] - 1.1
elif heating_system == 'T4':
    tHC_corr[0] = delta_ctrl[0] - 0.9
else:
    tHC_corr[0] = 0

if cooling_system == 'T1':
    tHC_corr[1] = delta_ctrl[1] + 0.5
elif cooling_system == 'T2':  # no emission losses but emissions for ventilation
    tHC_corr[1] = delta_ctrl[1] + 0.7
elif cooling_system == 'T3':
    tHC_corr[1] = delta_ctrl[1] + 0.5
else:
    tHC_corr[1] = 0

return tHC_corr[0], tHC_corr[1]


control_delta_heating = {'T1': 2.5, 'T2': 1.2, 'T3': 0.9, 'T4': 1.8}
control_delta_cooling = {'T1': -2.5, 'T2': -1.2, 'T3': -0.9, 'T4': -1.8}
system_delta_heating = {'T0': 0.0, 'T1': 0.15, 'T2': -0.1, 'T3': -1.1, 'T4': -0.9}
system_delta_cooling = {'T0': 0.0, 'T1': 0.5, 'T2': 0.7, 'T3': 0.5}
try:
    result_heating = 0.0 if heating_system == 'T0' else (control_delta_heating[control_system] +
                                                         system_delta_heating[heating_system])
    result_cooling = 0.0 if cooling_system == 'T0' else (control_delta_cooling[control_system] +
                                                         system_delta_cooling[cooling_system])
except KeyError:
    raise ValueError(
        'Invalid system / control combination: %s, %s, %s' % (heating_system, cooling_system, control_system))

return result_heating, result_cooling

Ideally, these two snippets produce the same result. I would argue, though, that the table-based method - keeping the data in lookup tables such as dicts - makes the code clearer to understand than a long chain of if...elif...else statements. It boils the code down to its core message: the result is the sum of the deltas for the control system and the system type.

Another improvement, in my mind, is getting rid of the X[0] and X[1] constructs. In the original code, these were used to hold the heating and cooling values respectively. While that can still be understood in a small snippet like this, the paradigm breaks down quickly. One way to improve it would be to assign HEATING = 0 and COOLING = 1 and then index like this: tHC_corr[HEATING]. That communicates the intention much better. Using separate dictionaries for heating and cooling values, for both emission systems and control systems, sidesteps the issue altogether.
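As an illustration only, here is a hypothetical sketch of that named-index variant, using the 'T1' values from the tables above (we did not end up using it, since the separate dictionaries make it unnecessary):

# illustration only: named indices instead of magic 0/1 positions
HEATING, COOLING = 0, 1

delta_ctrl = [2.5, -2.5]                         # control system 'T1'
tHC_corr = [0.0, 0.0]
tHC_corr[HEATING] = delta_ctrl[HEATING] + 0.15   # heating system 'T1'
tHC_corr[COOLING] = delta_ctrl[COOLING] + 0.5    # cooling system 'T1'
print tHC_corr                                   # [2.65, -2.0]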


@shanshan and I went a step further than just changing the code. Here is a model of what the documentation of the function could look like:

Model of losses in the emission and control system for space heating and cooling.

Correction factor for the heating and cooling setpoints. Extracted from EN 15316-2

(see cea\databases\CH\Systems\emission_systems.xls for valid values for the heating and cooling system values)

T0 means no heating/cooling system is installed and therefore also no control system for heating/cooling.
In short, when the input system is T0, the output set point correction should be 0.0.
So if there is no cooling system, calling setpoint_correction_for_space_emission_systems with input (T1, T0, T1)
(type_hs, type_cs, type_ctrl) should return (2.65, 0.0): the control system only applies to the heating system.
In the opposite case with no heating system, input (T0, T3, T1) should return (0.0, -2.0): the control system
only applies to the cooling system.


:param heating_system: The heating system used. Valid values: T0, T1, T2, T3, T4
:type heating_system: str

:param cooling_system: The cooling system used. Valid values: T0, T1, T2, T3
:type cooling_system: str

:param control_system: The control system used. Valid values: T1, T2, T3, T4 - as defined in the
    contributors manual under Databases / Archetypes / Building Properties / Mechanical systems.
    T1 for none, T2 for PI control, T3 for PI control with optimum tuning, and T4 for room temperature control
:type control_system: str


:returns: two delta T to correct the set point temperature, dT_heating, dT_cooling
:rtype: tuple(double, double)

Note: The documentation explains what the function is for and mentions the relevant standard (EN 15316-2) - this could also be a reference to research papers or whatever. Further, the input values and types are explained, with a pointer to the contributors manual for reference, and a list of valid inputs is given.

What is missing? Boundary case behaviour - this should actually also be in the documentation! We test this behaviour in the next section:

Unit tests

Check out the test file in the tests/ folder:

import unittest

class TestCorrectionFactorForHeatingAndCoolingSetpoints(unittest.TestCase):
    def test_calc_t_em_ls_raises_ValueError(self):
        from cea.demand.sensible_loads import setpoint_correction_for_space_emission_systems
        self.assertRaises(ValueError, setpoint_correction_for_space_emission_systems, heating_system='T1',
                          cooling_system='T1', control_system=None)
        self.assertRaises(ValueError, setpoint_correction_for_space_emission_systems, heating_system='T1',
                          cooling_system='XYZ', control_system='T1')
        self.assertRaises(ValueError, setpoint_correction_for_space_emission_systems, heating_system='T1',
                          cooling_system=None, control_system='T1')

    def test_calc_t_em_ls_T0(self):
        from cea.demand.sensible_loads import setpoint_correction_for_space_emission_systems
        self.assertEqual(setpoint_correction_for_space_emission_systems('T1', 'T0', 'T1'), (2.65, 0.0))
        self.assertEqual(setpoint_correction_for_space_emission_systems('T0', 'T3', 'T1'), (0.0, -2.0))

Right now, it contains a single class, TestCorrectionFactorForHeatingAndCoolingSetpoints, that inherits from unittest.TestCase. When Jenkins runs, it will pick up this class, a) because the filename starts with test_ and b) because the class inherits from TestCase. Jenkins will then run all the test cases: each method starting with test_ is run and checked. The self.assert* calls compare expected values against the actual results of the setpoint_correction_for_space_emission_systems function (a sketch of how to run the same discovery locally follows the list below). We can see that:

  • calling the method with None or 'XYZ' as one of the parameters should raise an instance of ValueError - this is an edge case that is being tested
  • the setpoint correction for heating or cooling emission systems of type 'T0' (no emission system) should be 0.0
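As a sketch (assuming the tests live in a tests/ folder in the repository root), you can run the same kind of discovery locally that the CI server performs:

import unittest

# collect every TestCase in files matching tests/test_*.py and run them
suite = unittest.defaultTestLoader.discover('tests', pattern='test_*.py')
unittest.TextTestRunner(verbosity=2).run(suite)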

Unit tests like this are a great way to describe how a function should behave, independent of the implementation. They also represent another “mode” of thinking, where you consider edge cases, expected values, failure modes etc. at a very low level - at the level of a “unit” of code.

I believe that especially for code like the CEA, we should use this technique to specify the expected behavior of the system, as we will be on the hook for bugs for the next decade or so. We want to be able to prove the correctness of our implementation!

Wednesday, June 22, 2016

RevitPythonShell for Revit 2017

I have just released a version of RPS for Revit 2017. It is labeled as a “Pre-Release”, since I have not really had time to test it, but you are all welcome to give it a spin and tell me about any problems you find.

Better yet: Send me pull requests with fixes to your problems!

Tuesday, October 20, 2015

Accessing specific overloads in IronPython (Document.LoadFamily)

The RevitPythonShell makes working with Revit a bit easier, because, you know, Python. The specific flavour of Python used is IronPython - a port of the Python language to the .NET platform. IronPython lets you call into .NET objects seamlessly. In theory. Except when it doesn't. All abstractions are leaky.
This article is all about a specific leak, based on an impedance mismatch between the object model used in .NET and the one used in Python: in C#, you can overload a method. A simple way to think about this is to realize that the name of the method includes its signature, the list of parameter types it takes. Go read a book if you want the gory details. Any book. I'm just going to get down to earth here and talk about a specific example:
The Document.LoadFamily method.
The standard way of selecting a specific overload is to just call the method and let IronPython figure it out.
The place to read up on how to call overloaded methods is the IronPython documentation. To quote:
When IronPython code calls an overloaded method, IronPython tries to select one of the overloads at runtime based on the number and type of arguments passed to the method, and also names of any keyword arguments.
This works really well if the types passed in match the signatures of a specific method overload well. IronPython will try to automatically convert types, but will fail with a TypeError if more than one method overload matches.
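As a toy illustration (nothing Revit-specific), IronPython resolves an overloaded .NET method like System.Math.Abs purely from the argument you pass:

from System import Math

print Math.Abs(-1)    # resolves to the Int32 overload -> 1
print Math.Abs(-1.5)  # resolves to the Double overload -> 1.5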
The Document.LoadFamily method is special in that one of its parameters is marked as out in .NET - according to the standard IronPython documentation (REF) that should translate into a tuple of return values - and it does, if you know how. It is just non-intuitive - see this question on Stack Overflow:
revitpythonshell provides two very similar methods to load a family.
LoadFamily(self: Document, filename:str) -> (bool, Family)
LoadFamily(self: Document, filename:str) -> bool
So it seems like only the return values are different. I have tried calling it in several different ways:
(success, newFamily) = doc.LoadFamily(path)
success, newFamily = doc.LoadFamily(path)
o = doc.LoadFamily(path)
But I always just get a bool back. I want the Family too.
What is happening here is that the C# definitions of the method are:
public bool LoadFamily(
    string filename
)

public bool LoadFamily(
    string filename,
    out Family family
)
The IronPython syntax candy for out parameters, returning a tuple of results, can’t automatically be selected here, because calling LoadFamily with just a string argument matches the first method overload.
You can get at the overload you are looking for like this:
import clr
family = clr.Reference[Family]()
# family is now an Object reference (not set to an instance of an object!)
success = doc.LoadFamily(path, family)  # explicitly choose the overload
# family is now a Revit Family object and can be used as you wish
This works by creating an object reference to pass into the function, and the method overload resolution thingy now knows which one to look for.
Working under the assumption that the list of overloads shown in the RPS help is the same order as they appear, you can also do this:
success, family = doc.LoadFamily.Overloads.Functions[0](path)
and that will, indeed, return a tuple (bool, Autodesk.Revit.DB.Family). I just don’t think you should be doing it that way, as it introduces a dependency on the order of the method overloads - I wouldn’t want that smell in my code…
Note, that this has to happen inside a transaction, so a complete example might be:
import clr
t = Transaction(doc, 'loadfamily')
t.Start()
try:
    family = clr.Reference[Family]()
    success = doc.LoadFamily(path, family)
    # do stuff with the family
finally:
    t.Commit()

Tuesday, July 7, 2015

The __file__ variable in RevitPythonShell

Today I’m going to talk about a special builtin variable, __file__, as it is implemented in the RevitPythonShell. This feature is only available to external scripts and RpsAddins. This is similar to how __file__ is normally defined for Python: it is not defined in the REPL.

Simply put: __file__ contains the path to the current file being run.

With external scripts, this is a path to a python script. With RpsAddins, it is the path to the addin’s DLL, a path separator, and the name of the script (e.g. C:\Program Files (x86)\MyAddin\MyAddin.dll\...).

Suppose you have deployed some scripts to another computer. Suppose those scripts rely on other files - a database, maybe, or icons, pictures, anything really. If you keep referencing these files as C:\Users\Gareth\AwesomeScripts\all_my_data.sqlite you are going to run into difficulties on a computer that doesn’t belong to Gareth. Meg’s computer won’t know how to find the database! And Meg is going to complain to her boss and her boss is going to complain to your boss and your boss is going to go get coffee, lock himself up in his office and brood for a very long time. Then, he’s going to send someone to tell you to come to his office ASAP. Now!

You don’t want that to happen! That is why you’re going to make sure that all file references are relative to the installation of your external scripts / RpsAddins. And unless you have a priori knowledge of the folder name you’re installing to, well, I’ve got your back:

def get_folder():
    import os
    # assume external script
    folder = os.path.dirname(__file__)
    if folder.lower().endswith('.dll'):
        # nope - RpsAddin
        folder = os.path.dirname(folder)
    return folder
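With that helper, any file your script needs can be referenced relative to wherever it is installed - reusing the database example from above:

import os

# look up the database next to the deployed script / RpsAddin instead of hard-coding a user path
db_path = os.path.join(get_folder(), 'all_my_data.sqlite')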

Tuesday, June 30, 2015

Embedding a webserver in Autodesk Revit with the RevitPythonShell

This is a more elaborate example that shows how to embed a webserver in Autodesk Revit and use it to automate tasks.

How do you access the BIM from outside Revit? With the Revit API it is easy to access the outside world from within Revit. Sometimes you want to write software that needs to read a schedule from a .rvt document - from outside of Revit.

As an example, say you have a shell script that reads in schedule data from a Revit document and saves it to a CSV file.

One way to solve this is to have Revit act as a web server, say, http://localhost:8080. You could then use curl:

curl http://localhost:8080/schedules/my_schedule_name > my_local_file_name.csv

Let us build a RevitPythonShell script that allows you to do just that: Export any schedule in the BIM as a CSV file through a web service. Depending on the URL requested, you could return a screenshot of the current view or ways to open / close documents:

curl http://localhost:8080/screenshot
curl http://localhost:8080/open/Desktop/Project1.rvt

This is a variation on the non-modal dialog issue (see here too!). We want to run a web server in a separate thread, but have handling requests run in the main Revit thread so that we have access to the API. We will be using an external event to solve this.

The web server itself uses the HttpListener class and runs in a separate thread, where it just waits for new connections. These are then handled by pushing them onto a queue and notifying the ExternalEvent that a new event has happened.

This is where the script starts:

def main():
    contexts = ContextQueue()
    eventHandler = RpsEventHandler(contexts)
    externalEvent = ExternalEvent.Create(eventHandler)
    server = RpsServer(externalEvent, contexts)
    serverThread = Thread(ThreadStart(server.serve_forever))
    serverThread.Start()

Whoa! What is going on here?

  • a communication channel, contexts, is created for sending web requests (stashed as HttpListenerContext instances) to the ExternalEvent thread.
  • an IExternalEventHandler implementation called RpsEventHandler handles producing the output.
  • a web server, wrapped in a method serve_forever, listens for web requests with the HttpListener, stores them in the context queue and notifies the external event that there is work to be done.

We’ll look into each component one by one below. Note: The full code can be found in the rps-sample-scripts GitHub repository.

Let’s start with the ContextQueue:

class ContextQueue(object):
    def __init__(self):
        from System.Collections.Concurrent import ConcurrentQueue
        self.contexts = ConcurrentQueue[HttpListenerContext]()

    def __len__(self):
        return len(self.contexts)

    def append(self, c):
        self.contexts.Enqueue(c)

    def pop(self):
        success, context = self.contexts.TryDequeue()
        if success:
            return context
        else:
            raise Exception("can't pop an empty ContextQueue!")

This is nothing special - just a thin wrapper around ConcurrentQueue from the .NET library. The RpsServer will append to the context queue while the RpsEventHandler pops from it.

A more interesting class to look at is probably RpsEventHandler:

class RpsEventHandler(IExternalEventHandler):
    def __init__(self, contexts):
        self.contexts = contexts
        self.handlers = {
            'schedules': get_schedules
            # add other handlers here
        }

    def Execute(self, uiApplication):
        while self.contexts:
            context = self.contexts.pop()
            request = context.Request
            parts = request.RawUrl.split('/')[1:]
            handler = parts[0]  # FIXME: add error checking here!
            args = parts[1:]
            try:
                rc, ct, data = self.handlers[handler](args, uiApplication)
            except:
                rc = 404
                ct = 'text/plain'
                data = 'unknown error'
            response = context.Response
            response.ContentType = ct
            response.StatusCode = rc
            buffer = Encoding.UTF8.GetBytes(data)
            response.ContentLength64 = buffer.Length
            output = response.OutputStream
            output.Write(buffer, 0, buffer.Length)
            output.Close()  # flush and close the response stream

The Execute method here does the grunt work of working with the .NET libraries and delegating requests to the specific handlers. You can extend this class by adding new handlers to it. In fact, you don’t even need to extend the class to add handlers - just register them in the handlers dictionary.

Each handler takes a list of path elements and a UIApplication object. The handler runs in the Revit API context. It should return an HTTP status code, a content type and a string containing the response.

An example of such a handler is get_schedules:

def get_schedules(args, uiApplication):
    '''add code to get a specific schedule by name here'''
    print 'inside get_schedules...'
    from Autodesk.Revit.DB import ViewSchedule
    from Autodesk.Revit.DB import FilteredElementCollector
    from Autodesk.Revit.DB import ViewScheduleExportOptions
    import tempfile, os, urllib

    doc = uiApplication.ActiveUIDocument.Document
    collector = FilteredElementCollector(doc).OfClass(ViewSchedule)
    schedules = {vs.Name: vs for vs in list(collector)}

    if len(args):
        # export a single schedule
        schedule_name = urllib.unquote(args[0])
        if not schedule_name.lower().endswith('.csv'):
            # attach a `.csv` to URL for browsers
            return 302, None, schedule_name + '.csv'
        schedule_name = schedule_name[:-4]
        if not schedule_name in schedules.keys():
            return 404, 'text/plain', 'Schedule not found: %s' % schedule_name
        schedule = schedules[schedule_name]
        fd, fpath = tempfile.mkstemp(suffix='.csv')
        dname, fname = os.path.split(fpath)
        opt = ViewScheduleExportOptions()
        opt.FieldDelimiter = ', '
        schedule.Export(dname, fname, opt)
        with open(fpath, 'r') as csv:
            result = csv.read()
        return 200, 'text/csv', result
    else:
        # return a list of valid schedule names
        return 200, 'text/plain', '\n'.join(schedules.keys())

When you write your own handler functions, make sure to implement the expected signature and return values: rc, ct, data = my_handler_function(args, uiApplication).
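For example, a hypothetical handler that just reports the title of the active document could look like this (and would be registered under a key of your choice in the handlers dictionary):

def get_docname(args, uiApplication):
    # hypothetical example handler: returns the active document's title as plain text
    doc = uiApplication.ActiveUIDocument.Document
    return 200, 'text/plain', doc.Title

# in RpsEventHandler.__init__ (or on an existing instance):
#     self.handlers['docname'] = get_docname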

In get_schedules, a FilteredElementCollector is used to find all ViewSchedule instances in the currently active document. Using a dict comprehension is a nifty way to quickly make a lookup table for checking the arguments.

The args parameter contains the components of the url after the first part, which is used to select the handler function. So if the requested URL were, say, http://localhost:8080/schedules, then args would be an empty list. In this case, we just return a list of valid schedule names, one per line - see the else at the bottom of the function.

If the URL were, say http://localhost:8080/schedules/My%20Schedule%20Name, then the args list would contain a single element, "My%20Schedule%20Name". The %20 encoding is a standard for URLs and is used to encode a space character. We use urllib to unquote the name.
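For instance (this runs in IronPython as well as CPython 2):

import urllib

print urllib.unquote('My%20Schedule%20Name')  # -> My Schedule Name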

To make the function work nicely with a browser, it helps if the URL ends in .csv - so we redirect to the same URL with .csv tacked on if it is missing! The code for handling the redirect can be found in the full sample script on GitHub. Notice how the HTTP return code 302 is used as the return value for rc - you can look up all the HTTP return codes online; we will only be using 200 (OK), 302 (Found - used for redirects) and 404 (Not Found).
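As a rough, hypothetical sketch (the real implementation is in the sample script on GitHub), the 302 case could be special-cased inside RpsEventHandler.Execute along these lines, with response, handler, rc, ct and data as in the snippet above:

if rc == 302:
    # redirect the browser to the same resource with '.csv' appended;
    # `data` holds the new resource name returned by the handler
    response.Redirect('/%s/%s' % (handler, data))
    response.Close()
else:
    response.StatusCode = rc
    response.ContentType = ct
    buffer = Encoding.UTF8.GetBytes(data)
    response.ContentLength64 = buffer.Length
    response.OutputStream.Write(buffer, 0, buffer.Length)
    response.Close()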

Next, the script checks to make sure the schedule name is a valid schedule in the document. A 404 return code is used to indicate an error here.

The actual code for returning a schedule makes use of a technique described in Jeremy Tammik’s blog post The Schedule API and Access to Schedule Data. The ViewSchedule.Export method is used to write the schedule to a temporary file in CSV format and then read back into memory before deleting the file on disk. This is a bit of a hack and coming up with a better solution is left as an exercise for the reader…

The final piece in our puzzle is the RpsServer:

class RpsServer(object):
    def __init__(self, externalEvent, contexts, port=8080):
        self.port = port
        self.externalEvent = externalEvent
        self.contexts = contexts

    def serve_forever(self):
        try:
            self.running = True
            self.listener = HttpListener()
            prefix = 'http://localhost:%i/' % self.port
            self.listener.Prefixes.Add(prefix)
            try:
                print 'starting listener', prefix
                self.listener.Start()
                print 'started listener'
            except HttpListenerException as ex:
                print 'HttpListenerException:', ex
                return
            waiting = False
            while self.running:
                if not waiting:
                    context = self.listener.BeginGetContext(
                        AsyncCallback(self.handleRequest), self.listener)
                waiting = not context.AsyncWaitHandle.WaitOne(100)
        finally:
            print 'stop listening...'
            if self.listener.IsListening:
                self.listener.Stop()

    def stop(self):
        print 'stop()'
        self.running = False

    def handleRequest(self, result):
        '''pass the request to the RevitEventHandler'''
        try:
            listener = result.AsyncState
            if not listener.IsListening:
                return
            context = listener.EndGetContext(result)
        except:
            # Catch the exception when the thread has been aborted
            return
        self.contexts.append(context)
        self.externalEvent.Raise()
        print 'raised external event'

This class implements the serve_forever function that starts an HttpListener on a specified port and uses handleRequest to pass any requests on to the external event for processing inside the Revit API context.

Check the example on GitHub.

Monday, June 1, 2015

Using esoreader to parse EnergyPlus eso files

A short while ago I posted a short tutorial on the esoreader module. This post is an update, showing off the new pandas interface that makes life so much easier when exploring EnergyPlus output files.

The building simulation engine EnergyPlus stores its main output in a file with the ending ‘.eso’. This format makes it easy to log variable values during simulation, but it is hard to use for post-processing. EnergyPlus offers a sqlite version of this data, but using it requires understanding the eso file format itself. EnergyPlus can also output a csv file, but that is limited in the number of columns.

The esoreader module makes it very easy to explore the output of EnergyPlus, say, in an IPython notebook interactive environment.

I wrote this module as part of my work at the Chair of Architecture and Building Systems (A/S) at the Institute of Technology in Architecture, ETH Zürich, Switzerland.

In [1]: import esoreader

In [2]: eso = esoreader.read_from_path(r"C:\...\experiment01.eso")

In [3]: eso.find_variable('heating')
[('TimeStep', None, 'Heating:EnergyTransfer'),
 ('TimeStep', 'DEFAULT_ZONEZONEHVAC:IDEALLOADSAIRSYSTEM',
  'Zone Ideal Loads Zone Total Heating Energy')]

In [4]: df = eso.to_frame('heating energy')

In [5]: df[:10]
0                            8596050.719384
1                            8672511.667988
2                            8737544.119096
3                            8799182.506582
4                            8862116.803218
5                            8928593.537248
6                            5296266.226576
7                                  0.000000
8                                  0.000002
9                                  0.000000

In [6]: df.plot()
Out[6]: <matplotlib.axes._subplots.AxesSubplot at 0x7854090>

In [7]: %matplotlib tk

In [8]: df.plot()
Out[8]: <matplotlib.axes._subplots.AxesSubplot at 0x7b66670>

[plot: Zone Ideal Loads Zone Total Heating Energy]

Notice in the above example how the variable is matched by substring - you don’t have to specify the whole variable name. Each matching variable will show up in the resulting DataFrame with the key used as the column name - in this case ‘DEFAULT_ZONEZONEHVAC:IDEALLOADSAIRSYSTEM’.

Also, as this is an IPython session, I used the magic incantation %matplotlib tk to switch on the GUI loop that allows plotting. You can choose another backend if you like, but I am pretty sure that tk should be available with your Python distribution.

An example with multiple columns:

In [1]: eso.find_variable('net thermal radiation heat gain energy')
[(..., 'DPVROOF:1157058.3', 'Surface Outside Face Net Thermal Radiation Heat Gain Energy'),
 (..., ..., 'Surface Outside Face Net Thermal Radiation Heat Gain Energy'),
 ...
 (..., ..., 'Surface Outside Face Net Thermal Radiation Heat Gain Energy')]

In [2]: df = eso.to_frame('net thermal radiation heat gain energy')

In [3]: df.plot()
Out[3]: <matplotlib.axes._subplots.AxesSubplot at 0xbd11150>

[plot: Net Thermal Radiation Heat Gain Energy]

The key parameter to to_frame

You can use the key parameter to select a single column:

In [1]: df = eso.to_frame('net thermal radiation', key='DPVROOF:1157058.3')

In [2]: df[:10]
0    -8985934.016604
1    -8453530.628023
2    -7611418.498363
3    -6936246.291753
4    -6206109.857522
5    -5879653.262523
6    -5676601.453020
7    -5606988.050900
8    -5844912.195173
9    -4712551.701917

The index parameter to to_frame

You can use the index parameter to specify an index for the DataFrame. Since this is time-series data, a common pattern could be:

In [1]: import pandas as pd

In [2]: hours_in_year = pd.date_range('2013-01-01', '2013-12-31 T23:00', freq='H')

In [3]: df = eso.to_frame('heating energy', index=hours_in_year)

In [4]: df[:10]
2013-01-01 00:00:00                            8596050.719384
2013-01-01 01:00:00                            8672511.667988
2013-01-01 02:00:00                            8737544.119096
2013-01-01 03:00:00                            8799182.506582
2013-01-01 04:00:00                            8862116.803218
2013-01-01 05:00:00                            8928593.537248
2013-01-01 06:00:00                            5296266.226576
2013-01-01 07:00:00                                  0.000000
2013-01-01 08:00:00                                  0.000002
2013-01-01 09:00:00                                  0.000000