In this Python tutorial, we will learn how to make a graph in Python TensorFlow. We will also cover the following topics.
- TensorFlow graph vs eager
- TensorFlow graph and session
- Tensorflow graph to keras model
- TensorFlow graph with loop
- Tensorflow with graph as default
- TensorFlow graph_replace
- TensorFlow has no attribute ‘reset_default_graph’
- Tensorflow graph input name
- TensorFlow graph.get_operation_by_name
- TensorFlow graph parallelization
- TensorFlow graph all variables
- Tensorflow add_graph
- TensorFlow split tensor
- TensorFlow graph finalize
- TensorFlow graph get all tensor names
- TensorFlow get input and output
- TensorFlow get input shape
- TensorFlow get weights
- TensorFlow get operation
- TensorFlow get layers
- TensorFlow graph list
Python TensorFlow Graph
- In Python TensorFlow, a graph is made up of nodes and edges: each node is an operation that takes one or more tensors as inputs and produces a tensor as output.
- Each edge is a tensor flowing between operations, so every tensor in the graph is produced by one operation and may feed into others.
- In simple words, the graph is the backbone of a TensorFlow program: it is the collection of nodes that defines the operations in your model.
- TensorFlow graphs are data structures that store a set of tf.Operation objects, which are the units of computation, and tf.Tensor objects, which are the units of data that flow between the operations. A short TF 2.x sketch of how a graph gets built follows this list.
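In TensorFlow 2.x you normally do not assemble a graph by hand: decorating a Python function with @tf.function traces it into a tf.Graph the first time it is called. Here is a minimal hedged sketch of that idea (the function name add_fn is just for illustration).
import tensorflow as tf

@tf.function
def add_fn(a, b):
    # the Python body is traced into a tf.Graph on the first call
    return tf.add(a, b)

result = add_fn(tf.constant(5), tf.constant(8))
print(result)   # tf.Tensor(13, shape=(), dtype=int32)

# the traced graph object behind the decorated function
print(add_fn.get_concrete_function(tf.constant(5), tf.constant(8)).graph)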
Example:
Now let’s see the example and check how to create and execute a graph in Python TensorFlow.
Source Code:
import tensorflow as tf

# disable eager execution so the graph can be run inside a session
tf.compat.v1.disable_eager_execution()

var1 = 5
var2 = 8
# tf.add() creates an 'Add' node on the default graph
result = tf.add(var1, var2, name='Add')

with tf.compat.v1.Session() as val:
    new_output = val.run(result)
    print(new_output)
In the above code, we have imported the TensorFlow library and called tf.compat.v1.disable_eager_execution() so that the graph can be executed inside a session.
After creating the variables, we have used the basic tf.add() operation to build the graph, passing the variables along with the ‘name’ parameter.
Here is the screenshot of the above code.
Read: TensorFlow Tensor to NumPy
TensorFlow graph vs eager
- In this Program, we will discuss the difference between graph and eager execution in Python TensorFlow.
- In eager execution, operations are evaluated immediately and return concrete values, while in graph mode operations are added to a tf.Graph that represents the dataflow of the computation and is executed later.
Example:
Let’s take an example and check the difference between the graph and eager execution in Python TensorFlow.
Source Code:
import tensorflow as tf

def eager_function(y):
    # eager execution: the result is computed immediately
    new_output = y ** 2
    print(new_output)
    return new_output

y = tf.constant([13, 16, 18, 19, 21])
eager_function(y)

new_graph = tf.Graph()
with new_graph.as_default():
    # inside the graph context, the op is only added to the graph, not executed
    result = tf.constant(16)
    print(result)
In the above code, we have imported the TensorFlow library and defined eager_function(), which squares its input with the ** operator and returns the value immediately. The second part builds a tf.Graph and adds a constant to it, so printing result only shows a symbolic tensor rather than a computed value.
Here is the implementation of the above code.
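As a hedged follow-up sketch, the same squaring function can be wrapped with tf.function so that TensorFlow traces it into a graph; the values are the same, but the wrapped version runs through the graph machinery (the names eager_square and graph_square are just for illustration).
import tensorflow as tf

def eager_square(y):
    return y ** 2                          # evaluated immediately, eagerly

graph_square = tf.function(eager_square)   # traced into a graph on the first call

y = tf.constant([13, 16, 18, 19, 21])
print(eager_square(y))                     # eager result
print(graph_square(y))                     # same values, computed through the traced graph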
Read: TensorFlow get shape
TensorFlow graph and session
- In this section, we will discuss how to use a session with a graph in Python TensorFlow.
- To perform this particular task, we first call the tf.compat.v1.disable_eager_execution() function so that operations are added to the default graph instead of being executed eagerly.
- Next, we declare the variables and build the operation by using the tf.math.multiply() method, passing the variables and the name parameter as arguments.
- To create a session, we are going to use tf.compat.v1.Session(), and to run the graph we will use val.run().
Example:
Let’s take an example and check how to use the session in the graph Python TensorFlow.
Source Code:
import tensorflow as tf

# switch to graph mode so the multiply op is added to the default graph
tf.compat.v1.disable_eager_execution()

x = 6
y = 7
result = tf.math.multiply(x, y, name='mult')

with tf.compat.v1.Session() as val:
    new_output = val.run(result)
    print(new_output)
In the above code, we have imported the TensorFlow library and created the variables holding the operand values. After that, we have used the tf.math.multiply() function, passing the x and y variables along with the name parameter. Once you run the session, the output displays the value.
Here is the screenshot of the above code.
Read: Python TensorFlow reduce_sum
TensorFlow graph with loop
- In this section, we will discuss how to use a loop in a Python TensorFlow graph.
- To perform this particular task, we are going to use the tf.while_loop() function, which repeatedly executes the body as long as the condition returns true.
Syntax:
Let’s have a look at the Syntax and understand the working of the tf.while_loop() function in Python TensorFlow.
tf.while_loop
(
cond,
body,
loop_vars,
shape_invariants=None,
parallel_iterations=10,
back_prop=True,
swap_memory=False,
maximum_iterations=None,
name=None
)
- It consists of a few parameters
- cond: A callable that returns the loop termination condition.
- body: A callable that defines the loop body and returns the updated loop variables.
- loop_vars: A list or tuple of tensors (or numpy arrays) that is passed to both cond and body.
- shape_invariants: By default it takes None and it specifies the shape invariants of the loop variables.
- parallel_iterations: By default it is 10 and it must be a positive integer; it controls how many iterations are allowed to run in parallel.
- back_prop: It is an optional parameter and it enables support for backpropagation through the loop.
- name: This parameter indicates the name of the operation.
Example:
Let’s take an example and check how to use the loop in Python TensorFlow Graph.
Source Code:
import tensorflow as tf

tens = tf.constant(20)
# condition: keep looping while the value is less than 10
tens1 = lambda tens: tf.less(tens, 10)
# body: add 1 to the loop variable on every iteration
tens2 = lambda tens: (tf.add(tens, 1), )
result = tf.while_loop(tens1, tens2, [tens])
print(result)
In the above code, we have imported the TensorFlow library and declared a variable by using the tf.constant() function. After that, we have used the tf.while_loop() function, passing the condition and body lambdas along with the initial loop variable. Because the starting value 20 is not less than 10, the body never runs and the loop simply returns the original value.
Here is the execution of the above code.
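For a loop that actually iterates, here is a short hedged sketch that counts from 0 up to 10 while accumulating a running total (the variable names are just for illustration).
import tensorflow as tf

i = tf.constant(0)
total = tf.constant(0)

# condition: keep looping while the counter is below 10
cond = lambda i, total: tf.less(i, 10)
# body: increment the counter and add it to the running total
body = lambda i, total: (tf.add(i, 1), tf.add(total, i))

final_i, final_total = tf.while_loop(cond, body, [i, total])
print(final_i.numpy(), final_total.numpy())   # 10 45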
Read: Python TensorFlow reduce_mean
TensorFlow graph_replace
- In this section, we will discuss how to replace tensors in a graph in Python TensorFlow.
- To perform this particular task, first, we will define a build() function and within this function we will use the tf.placeholder() function to create named input tensors and combine them with the tf.add() function.
- Next, we will use the tf.get_default_graph() function, which returns the default graph, and then we will use the ge.graph_replace() function from the graph editor to swap the placeholders with new tensors. Note that tensorflow.contrib.graph_editor is only available in TensorFlow 1.x.
Syntax:
Here is the Syntax of tf.compat.v1.get_default_graph() function in Python TensorFlow.
tf.compat.v1.get_default_graph()
Example:
Let’s take an example and check how to replace the graph in Python TensorFlow.
Source Code:
import tensorflow as tf
import tensorflow.contrib.graph_editor as ge   # requires TensorFlow 1.x

tf.compat.v1.disable_eager_execution()

def build():
    # named placeholders so they can be looked up later as "a:0" and "b:0"
    tens1 = tf.placeholder(dtype=tf.float32, name='a')
    tens2 = tf.placeholder(dtype=tf.float32, name='b')
    result = tf.add(tens1, tens2, name='c')

build()
new_result1 = tf.constant(2.0, shape=[2, 3])
new_result2 = tf.constant(4.0, shape=[2, 3])

tens1 = tf.get_default_graph().get_tensor_by_name("a:0")
tens2 = tf.get_default_graph().get_tensor_by_name("b:0")
result = tf.get_default_graph().get_tensor_by_name("c:0")

# copy the add subgraph, feeding the constants in place of the placeholders
new_output = ge.graph_replace(result, {tens1: new_result1, tens2: new_result2})

with tf.compat.v1.Session() as sess:
    print(sess.run(new_output))
In the above code, we have imported tensorflow.contrib.graph_editor as ge for replacing tensors in the graph, and then we defined the build() function, inside which we created named tensors by using the tf.placeholder() function.
After that, we have used the ge.graph_replace() function, mapping each placeholder to its replacement constant. Once you execute this code, the output displays the values.
Here is the execution of the above code.
Read: Python TensorFlow random uniform
TensorFlow has no attribute ‘reset_default_graph’
- In this Program, we will discuss the error TensorFlow has no attribute ‘reset_default_graph’ in Python TensorFlow.
- Basically, this error occurs because the tf.reset_default_graph() function was removed from the top-level namespace in TensorFlow 2.x.
Example:
In this example, we have simply called the tf.reset_default_graph() function.
Note: If you are using a TensorFlow 1.x version, this function works as-is in the example.
Now let’s see the solution to this error.
Solution:
To fix this error, we are going to use the updated function tf.compat.v1.reset_default_graph(), which is available through the compat.v1 module in TensorFlow 2.x.
Source Code:
import tensorflow as tf

# reset_default_graph() clears the default graph and returns None
result = tf.compat.v1.reset_default_graph()
print(result)
Here is the implementation of the above code.
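To see what the reset actually does, here is a short hedged sketch (the op names are arbitrary): a few operations are added to the default graph, the graph is reset, and the operation count drops back to zero.
import tensorflow as tf

tf.compat.v1.disable_eager_execution()   # build ops on the default graph

a = tf.constant(3, name='a')
b = tf.constant(4, name='b')
c = tf.add(a, b, name='c')
print(len(tf.compat.v1.get_default_graph().get_operations()))   # 3

tf.compat.v1.reset_default_graph()       # replace the default graph with an empty one
print(len(tf.compat.v1.get_default_graph().get_operations()))   # 0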
Read: Python TensorFlow one_hot
TensorFlow graph all variables
- In this Program, we will discuss how to get all the variables and operations in a TensorFlow graph.
- To do this task, first, we will build some operations by using the tf.constant() and tf.Variable() functions, assigning a value as well as the name parameter to each.
- Next, to list everything on the graph, we are going to use the tf.compat.v1.get_default_graph() function, whose get_operations() method returns the operation (and therefore variable) names from the graph.
Syntax:
Let’s have a look at the Syntax and understand the working of tf.compat.v1.get_default_graph() function in Python TensorFlow.
tf.compat.v1.get_default_graph()
Example:
Let’s take an example and check how to get all the variables in the TensorFlow graph.
Source Code:
import tensorflow as tf

# build on the default graph instead of running eagerly
tf.compat.v1.disable_eager_execution()

tens1 = tf.constant(2.6, name='const_tens1')
tens2 = tf.Variable(4.8, name='variable_tens2')
tens3 = tf.add(tens1, tens2, name='new_add')
tens4 = tf.multiply(tens3, tens1, name='new_mult')

# every constant, variable and op shows up as an operation on the graph
for new_op in tf.compat.v1.get_default_graph().get_operations():
    print(str(new_op.name))
In the above code, we have imported the TensorFlow library, disabled eager execution, and created tensors by using the tf.constant() and tf.Variable() functions; the loop then prints the name of every operation recorded on the default graph.
Here is the screenshot of the above code.
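If you only want the variables rather than every operation, here is a hedged sketch using tf.compat.v1.global_variables() (the variable names are arbitrary).
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# two named variables registered on the default graph
weights = tf.compat.v1.get_variable('weights', shape=[2, 2])
bias = tf.compat.v1.get_variable('bias', shape=[2])

# global_variables() returns only the variables, not every operation
for var in tf.compat.v1.global_variables():
    print(var.name, var.shape)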
Read: Python TensorFlow expand_dims
TensorFlow get input shape
- In this section, we will discuss how to get the input shape in Python TensorFlow.
- To do this task, we are going to load the CIFAR-10 dataset and read the input shape from the returned arrays.
- In Python TensorFlow, the tf.keras.datasets.cifar10.load_data() method is used to load the CIFAR-10 dataset.
Syntax:
Here is the Syntax of tf.keras.datasets.cifar10.load_data()
tf.keras.datasets.cifar10.load_data()
Example:
Let’s take an example and check how to get the input shape in TensorFlow.
Source Code:
import tensorflow as tf
import keras
from keras.datasets import cifar10

# load_data() returns (train, test) tuples of (images, labels)
new_dt = cifar10.load_data()
(input_train, target_train), (input_test, target_test) = new_dt
print("Input shape of training data:", input_train.shape)
print("Input shape of testing data:", input_test.shape)
In the above code, we have imported the cifar10 module from the keras.datasets library and then unpacked the training and testing image arrays. Once you execute this code, the output displays the input shapes.
Here is the execution of the above code.
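Another common way to get an input shape, shown here as a hedged sketch, is to read it from a Keras model with the model.input_shape attribute (the layer sizes below are arbitrary).
import tensorflow as tf

# a tiny model over CIFAR-10 sized images
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
    tf.keras.layers.Dense(10),
])

print(model.input_shape)   # (None, 32, 32, 3) -- None is the batch dimension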
Read: TensorFlow Sparse Tensor + Examples
TensorFlow get weights
- In this section, we will discuss how to get the weights in Python TensorFlow.
- To perform this particular task, we are going to build a model from tf.keras.layers.Dense() layers, passing an activation value to each, and then read the weights from a layer.
Syntax:
Let’s have a look at the Syntax and understand the working of the tf.keras.layers.Dense() function.
tf.keras.layers.Dense
(
units,
activation=None,
use_bias=True,
kernel_initializer='glorot_uniform',
bias_initializer='zeros',
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None
)
- It consists of a few parameters
- units: This parameter indicates the dimensionality of the output space.
- activation: This parameter specifies the activation function; if you do not mention a value, no activation is applied.
- use_bias: By default it is True and it specifies whether the layer uses a bias vector.
- kernel_initializer: It defines the initializer for the kernel weights matrix and by default it is 'glorot_uniform'.
- bias_regularizer: This parameter indicates the regularizer applied to the bias vector.
- activity_regularizer: This parameter specifies the regularizer applied to the output of the layer.
- kernel_constraint: By default it is None and it specifies the constraint applied to the kernel weights matrix.
Example:
Let’s take an example and check how to get the weights in Python TensorFlow.
Source Code:
import tensorflow as tf

new_model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, activation="relu"),
    tf.keras.layers.Dense(3, activation="tanh"),
    tf.keras.layers.Dense(4),
])

# calling the model on an input builds the layers and creates their weights
new_input = tf.random.normal((2, 3))
output = new_model(new_input)

# weights of the first Dense layer: kernel matrix and bias vector
print(new_model.layers[0].weights)
In the above code, we have imported the TensorFlow library and then built a model from tf.keras.layers.Dense() layers, assigning the activation values ‘relu’ and ‘tanh’.
After that, we have applied the tf.random.normal() function, specifying the shape of the input, and called the model on it so that the weights are created.
Here is the implementation of the above code.
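As a short hedged follow-up, the weights can also be read as plain NumPy arrays with the layer's get_weights() method; the sketch below builds a one-layer model just for illustration.
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, activation="relu")])
model.build(input_shape=(None, 3))   # creates the weights without calling the model

# get_weights() returns the kernel and bias as NumPy arrays
kernel, bias = model.layers[0].get_weights()
print(kernel.shape, bias.shape)      # (3, 2) (2,)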
Read: Python TensorFlow truncated normal
TensorFlow get operation
- In this section, we will discuss how to run an operation in a TensorFlow graph.
- To do this task, we are going to use the tf.math.divide() operation and get its result from the given values.
Syntax:
Here is the Syntax of tf.math.divide() function in Python TensorFlow.
tf.math.divide
(
x,
y,
name=None
)
- It consists of a few parameters
- x: This parameter indicates the numerator input tensor.
- y: This parameter indicates the denominator input tensor.
- name: This parameter specifies the name of the operation.
Example:
Let’s take an example and check how to use the tf.math.divide() operation in Tensor.
Source Code:
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

tens1 = 6
tens2 = 7
# creates a 'divide' node on the default graph
result = tf.math.divide(tens1, tens2, name='divide')

with tf.compat.v1.Session() as val:
    new_output = val.run(result)
    print(new_output)
In the above code, we have used the tf.math.divide() function, passing the values as arguments. After that, we have used the tf.compat.v1.Session() function to run the graph.
Here is the execution of the above code.
Read: Convert list to tensor TensorFlow
TensorFlow get layers
- In this section, we will discuss how to get the layers in Python TensorFlow.
- To perform this particular task, we are going to build a model from tf.keras.layers.Dense() layers, passing an activation value to each.
Syntax:
Here is the Syntax of the tf.keras.layers.Dense() function in Python.
tf.keras.layers.Dense
(
units,
activation=None,
use_bias=True,
kernel_initializer='glorot_uniform',
bias_initializer='zeros',
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None
)
- It consists of a few parameters
- units: This parameter indicates the dimensionality of the output space.
- activation: This parameter specifies the activation function; if you do not mention a value, no activation is applied.
- use_bias: By default it is True and it specifies whether the layer uses a bias vector.
- kernel_initializer: It defines the initializer for the kernel weights matrix and by default it is 'glorot_uniform'.
- bias_regularizer: This parameter indicates the regularizer applied to the bias vector.
- activity_regularizer: This parameter specifies the regularizer applied to the output of the layer.
- kernel_constraint: By default it is None and it specifies the constraint applied to the kernel weights matrix.
Example:
Let’s take an example and check how to get the layers in Python TensorFlow.
Source Code:
import numpy
import tensorflow as tf

new_val = numpy.array([[[4., 5., 6.], [16., 25., 31.]],
                       [[28., 99., 16.], [25., 81., 55.]]])   # shape (2, 2, 3)
tens1 = 3
tens2 = 2

model = tf.keras.Sequential([
    tf.keras.layers.Dense(tens2 * tens1, input_shape=(tens2, tens1), activation='relu'),
    tf.keras.layers.Dense(45, activation='sigmoid'),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(3, activation='softmax'),
])

result = model(new_val)
print(result)
In the above code, we have created an array by using the numpy.array() function and then declared two variables that specify the input shape.
After that, we have applied the tf.keras.layers.Dense() function, setting the number of units to the product of the two variables and the activation to relu, and then called the model on the array.
Here is the screenshot of the above code.
As you can see in the screenshot, the output displays the result of the sequential layers.
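Once a model is built, its layers can also be listed directly from the model.layers attribute; here is a hedged sketch with arbitrary layer sizes.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])

# model.layers holds the layer objects in order
for layer in model.layers:
    print(layer.name, layer.output_shape)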
Read: Tensorflow iterate over tensor
TensorFlow graph list
- In this section, we will discuss how to list the contents of a graph in Python TensorFlow.
- To perform this particular task, we are going to use the tf.compat.v1.get_default_graph() function, which returns the default graph, and then iterate over its operations.
Syntax:
Here is the Syntax of tf.compat.v1.get_default_graph() function in Python TensorFlow.
tf.compat.v1.get_default_graph()
Example:
Let’s take an example and check how to get the graph list in Python TensorFlow.
Source Code:
import tensorflow as tf

# build on the default graph instead of running eagerly
tf.compat.v1.disable_eager_execution()

new_tens = tf.Variable(3)
new_tens2 = tf.Variable(8)
new_tens3 = tf.Variable(2)
result = (new_tens + new_tens2) * new_tens3

# list every operation recorded on the default graph
for op in tf.compat.v1.get_default_graph().get_operations():
    print(op.name)
In the above code, we have imported the TensorFlow library and then built the operations by using the tf.Variable() function. After that, we have used the tf.compat.v1.get_default_graph() function and printed the name of every operation it contains.
You can refer to the below screenshot.
Read: Gradient descent optimizer TensorFlow
TensorFlow split tensor
- In this section, we will discuss how to split a tensor in Python TensorFlow.
- To perform this particular task, we are going to use the tf.split() function, which divides a tensor into a list of sub-tensors.
Syntax:
Let’s have a look at the Syntax and understand the working of the tf.split() function in Python.
tf.split
(
value,
num_or_size_splits,
axis=0,
num=None,
name='split'
)
- It consists of a few parameters
- value: This parameter indicates the tensor which we want to split.
- num_or_size_splits: Either an integer number of equal splits along the axis, or a list giving the size of each output along that axis.
- axis: By default it takes 0 value and it defines the dimension and the range will be [- rank(value), rank(value)).
- name: It is an optional parameter and it specifies the name of the operation.
Example:
Let’s take an example and check how to split the tensor in Python TensorFlow.
Source Code:
import tensorflow as tf

tensor = tf.Variable(tf.random.uniform([6, 30], -1, 1))
# split the second axis into 2 equal pieces of shape (6, 15)
result = tf.split(tensor, 2, 1)
print(tf.shape(result).numpy())   # [ 2  6 15]
In the above code, we have imported the TensorFlow library, created a variable from the tf.random.uniform() function, and then used the tf.split() function to split the tensor into two equal pieces along the second axis.
Here is the screenshot of the above code.
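The num_or_size_splits argument also accepts a list of explicit sizes; here is a hedged sketch splitting one axis into pieces of width 1, 2 and 3 (the tensor values are arbitrary).
import tensorflow as tf

tensor = tf.reshape(tf.range(12), [2, 6])

# split the second axis into pieces of width 1, 2 and 3
a, b, c = tf.split(tensor, num_or_size_splits=[1, 2, 3], axis=1)
print(a.shape, b.shape, c.shape)   # (2, 1) (2, 2) (2, 3)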
TensorFlow get input and output
- In this section, we will discuss how to get the names of the input and output tensors in Python TensorFlow.
- To perform this particular task, we are going to use the tf.keras.layers.Dense() function, setting the activation and connecting it to an input tensor.
- Next, we are going to use the tf.keras.Model() class, which lets the user read the names of the model's input and output tensors.
Syntax:
Let’s have a look at the Syntax and understand the working of the tf.keras.Model() class.
tf.keras.Model
(
*args,
**kwargs
)
Example:
import tensorflow as tf
new_input = tf.keras.layers.Input(shape=(3000,))
tensor = tf.keras.layers.Dense(324, activation="relu")(new_input)
new_output = tf.keras.layers.Dense(20, activation="sigmoid")(tensor)
model = tf.keras.Model(inputs=new_input, outputs=new_output)
print('new_input_tensor:', new_input.name)
print('new_output_tensor:', new_output.name)
In this example, we have imported the TensorFlow library, used the tf.keras.layers.Input() function to create the input tensor, connected Dense layers with their activations to it, and wrapped everything in a tf.keras.Model so that the input and output tensor names can be printed.
Here is the execution of the above code.
TensorFlow graph.get_operation_by_name
- In this section, we will discuss how to use the get_operation_by_name() function in Python TensorFlow.
- To do this task, first we will define a get_graph() function that creates a new graph and, inside new_graph.as_default(), adds a named tf.constant() operation to it.
- Next, we are going to use the tf.Session() function to run the graph and call sess.graph.get_operation_by_name() to look the operation up by its name.
Note: In the given example we have used the TensorFlow 1.x version.
Syntax:
Here is the Syntax of the Graph.get_operation_by_name() function.
tf.Graph.get_operation_by_name(name)
Example:
Let’s take an example and check how to use the get_operation_by_name() function in Python TensorFlow.
Source Code:
import tensorflow as tf   # TensorFlow 1.x

def get_graph():
    new_graph = tf.Graph()
    with new_graph.as_default():
        # a named constant so it can be looked up later
        tens = tf.constant(7.0, name='tensor')
    return new_graph, tens

if __name__ == '__main__':
    new_graph, tens = get_graph()
    with tf.Session(graph=new_graph) as sess:
        print(sess.run(tens))
        # retrieve the operation from the session's graph by its name
        a = sess.graph.get_operation_by_name('tensor')
        print(a.name)
Here is the implementation of the above code.
TensorFlow graph parallelization
- In this section, we will discuss graph parallelization in Python TensorFlow.
- To perform this particular task, we are going to use the tf.parallel_stack() function, which stacks a list of rank-R tensors into a single rank-(R+1) tensor, building the pieces in parallel as the inputs become available.
Syntax:
Here is the Syntax of tf.parallel_stack() function in Python TensorFlow.
tf.parallel_stack
(
values,
name='parallel_stack'
)
- It consists of a few parameters.
- values: This parameter indicates the list of input tensors, which must all have the same shape and datatype.
- name: By default it is 'parallel_stack' and it is an optional parameter that defines the name of the operation.
Example:
Let’s take an example and check how to parallelize a Tensor in TensorFlow Python.
Source Code:
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

tens1 = tf.constant([16, 27])
tens2 = tf.constant([34, 56])
tens3 = tf.constant([98, 77])
# stack the three rank-1 tensors into one rank-2 tensor in parallel
result = tf.parallel_stack([tens1, tens2, tens3])

with tf.compat.v1.Session() as val:
    new_output = val.run(result)
    print(new_output)
In the above code, we have imported the TensorFlow library, created the tensors by using the tf.constant() function, and then used the tf.parallel_stack() function, passing the tensors as an argument.
After that, we have created a session by using the tf.compat.v1.Session() function and passed the ‘result’ variable to val.run(). Once you execute this code, the output displays the stacked values.
Here is the implementation of the above code.
TensorFlow write graph to tensorboard
- In this section, we will discuss how to write a graph to TensorBoard in Python TensorFlow.
- To do this task, we are going to use the tf.compat.v1.get_default_graph() function to get the graph and convert it to its GraphDef text form.
- Next, we write the serialized graph to a graphpb.txt file with f.write(graphpb_txt). Once you execute this code, the output displays the text form of the graph definition. A hedged TF 2.x sketch that writes a traced graph to TensorBoard follows the example below.
Example:
Let’s take an example and check how to write a graph to tensorboard in Python TensorFlow.
Source Code:
import tensorflow as tf

# serialize the default graph to its text (GraphDef) representation
graph_def = tf.compat.v1.get_default_graph().as_graph_def()
graphpb_txt = str(graph_def)

# write the text proto to a file
with open('graphpb.txt', 'w') as f:
    f.write(graphpb_txt)
print(graphpb_txt)
Here is the implementation of the above code.
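In TensorFlow 2.x, a traced graph can be written to TensorBoard with the tf.summary tracing API; here is a hedged sketch (the function name compute and the log directory are arbitrary).
import tensorflow as tf

@tf.function
def compute(x, y):
    return tf.add(x, y)

# point TensorBoard at this log directory afterwards
writer = tf.summary.create_file_writer('/tmp/graph_demo')

tf.summary.trace_on(graph=True)                  # start recording graph information
compute(tf.constant(2), tf.constant(3))          # trace the function once
with writer.as_default():
    tf.summary.trace_export(name='compute_graph', step=0)   # write the graph to the event file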
Tensorflow add_graph
- In this section, we will discuss how to add graphs in Python TensorFlow.
- To perform this particular task, we are going to use the tf.summary.create_file_writer() function and this function is used to declare a summary file writer.
Syntax:
Let’s have a look at the Syntax and understand the working of the tf.summary.create_file_writer() function in Python TensorFlow.
tf.summary.create_file_writer
(
logdir,
max_queue=None,
flush_millis=None,
filename_suffix=None,
name=None,
experimental_trackable=False,
)
- It consists of a few parameters.
- logdir: This parameter indicates the directory in which we can easily write the file.
- max_queue: By default it is None and it specifies the maximum number of summaries to keep in a queue before flushing.
- flush_millis: By default it is 120,000 and it defines the largest interval between flushes, in milliseconds.
- filename_suffix: It is an optional parameter; by default it is '.v2' and it is appended to the event file name.
- name: This parameter indicates the name of the operation.
- experimental_trackable: By default it takes false value and it will return the trackable resource.
Example:
Let’s take an example and check how to add graphs in Python TensorFlow.
Source Code:
import tensorflow as tf
import numpy as np

new_tens = tf.summary.create_file_writer("/tmp/tf2_summary_example")

for step in range(20):
    # sample 100 values from a normal distribution centred at the current step
    result = np.random.normal(loc=step, scale=1, size=[100])
    with new_tens.as_default(step=step):
        # returns True when the histogram summary was written successfully
        print(tf.summary.histogram(name='distribution', data=result))
In the above code, we have imported the TensorFlow and NumPy libraries and then used the tf.summary.create_file_writer() function, passing the directory where the summary files are written.
After that, we have used a for loop in which we applied the np.random.normal() function, specifying the loc, scale, and size values, and wrote a histogram summary at every step.
Here is the implementation of the above code.
Tensorflow graph input name
- In this section, we will discuss how to get the input name from the TensorFlow graph in Python.
- To perform this particular task, we are going to use the tf.keras.layers.Dense() function, setting the activation and connecting it to an input tensor.
- Next, we are going to use the tf.keras.Model() class, which lets the user read the name of the model's input tensor.
Example:
import tensorflow as tf
new_tensor = tf.keras.layers.Input(shape=(2000,))
tensor = tf.keras.layers.Dense(324, activation="relu")(new_tensor)
new_output = tf.keras.layers.Dense(20, activation="sigmoid")(tensor)
model = tf.keras.Model(inputs=new_tensor, outputs=new_output)
print('new_input_tensor:', new_tensor.name)
Here is the implementation of the above code.
So, in this Python tutorial, we have learned how to make a graph in Python TensorFlow. We have also covered the following topics.
- TensorFlow graph vs eager
- TensorFlow graph and session
- Tensorflow graph to keras model
- TensorFlow graph with loop
- Tensorflow with graph as default
- TensorFlow graph_replace
- TensorFlow has no attribute ‘reset_default_graph’
- Tensorflow graph input name
- TensorFlow graph.get_operation_by_name
- TensorFlow graph parallelization
- TensorFlow graph all variables
- Tensorflow add_graph
- TensorFlow split tensor
- TensorFlow graph finalize
- TensorFlow graph get all tensor names
- TensorFlow get input and output
- TensorFlow get input shape
- TensorFlow get weights
- TensorFlow get operation
- TensorFlow get layers
- TensorFlow graph list
I am Bijay Kumar, a Microsoft MVP in SharePoint. Apart from SharePoint, I have been working on Python, machine learning, and artificial intelligence for the last 5 years. During this time I have gained expertise in various Python libraries such as Tkinter, Pandas, NumPy, Turtle, Django, Matplotlib, TensorFlow, SciPy, Scikit-Learn, etc., for various clients in the United States, Canada, the United Kingdom, Australia, New Zealand, etc. Check out my profile.