Search This Blog

Friday, October 22, 2010

Antenna Deployment Subsystem Design

Back after a long time....

Here is what I designed for the antenna deployment subsystem, which happens to be a preliminary stage of the Studsat team recruitment in my college.

                Antenna Deployment Subsystem for a CubeSat

Abstract:
          The core idea of this document is to propose an antenna deployment system for a CubeSat. Since my knowledge in the field of antennas is limited, I have tried to bring up some concepts for antenna deployment, and you may find some of them a bit impractical to realize. The proposal has two major stages. The first is an illustration of the various deployment mechanisms in practice and their relevance to my proposal. The second is a detailed illustration of how to develop the system; the design reflects my perspective of the environment of the satellite and the launch conditions.
 Summary of the intended plan:
          To start off with the first phase, there are several constraints: space, mass, power consumption and reliability. To satisfy them, the antenna and its related subsystems must have minimum mass and occupy little volume. They must also withstand the high acceleration of the launch vehicle and the harsh conditions of space. The actuators that deploy the antennas to their full spread must mechanically hold the antennas in spite of the high g-forces experienced during takeoff, and they must not consume any energy during this phase, as some launch specifications require complete electrical shutdown.
            One type of actuator is the magnetic actuator, a very simple concept that is easy to implement. A permanent magnet holds the antenna strips during launch, which requires no electricity; as soon as the satellite is ejected from the payload capsule, a current is applied to generate a magnetic field opposing that of the permanent magnet, releasing the antennas. This complies with all the pre-launch requirements, and it is also reusable during the testing phase. The second type is the one-time-use melting-wire actuator, which is much lighter than the magnetic actuator. Tests have shown that a nichrome wire of approximately 2 mm diameter and 4 mm length can melt and break when a voltage as low as 4 V is applied at a current of only 0.9 A. In this design, a nylon wire holds the antenna in the stowed position. A nichrome coil is wound tightly around the nylon wire, and once the satellite is in space, power is delivered to the coil to melt through the nylon so that the antenna is released. The antennas swing back to their operating position once the wire is cut, driven by their stored elastic energy.
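To put numbers on the melting-wire option, here is a quick back-of-envelope budget. The 4 V / 0.9 A figures come from the tests quoted above; the 2-second burn time is my own illustrative assumption, not a tested value.

```java
// Rough electrical budget for the melting-wire actuator.
// 4 V and 0.9 A come from the tests quoted above; the 2 s burn
// duration is an assumed, illustrative value.
class ActuatorBudget {
    static double powerW(double volts, double amps) {
        return volts * amps;            // instantaneous heating power
    }
    static double energyJ(double watts, double seconds) {
        return watts * seconds;         // energy drawn over the whole burn
    }
    public static void main(String[] args) {
        double p = powerW(4.0, 0.9);    // about 3.6 W while heating
        double e = energyJ(p, 2.0);     // about 7.2 J for an assumed 2 s burn
        System.out.println(p + " W, " + e + " J");
    }
}
```

Even a few joules is a trivial draw for the solar array, which is why this actuator suits the power-starved deployment phase.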
            Moving on to the second phase, here is my design for the deployment system. Compared with the two methods above, the melting-wire actuator seems more suitable: it takes up less mass and volume, and the intended target can be achieved with less electrical effort. For the design of this subsystem I consider the following factors: first, the power consumption; second, the size of the actuating circuit; and last, the method of actuation and of signalling the other subsystems to start once the antenna has been deployed.
            Let me assume there is a separate control circuit allocated just for antenna deployment. Regarding power consumption, this subsystem needs power only after the ejection of the satellite from the payload capsule. The main power source is the energy harvested from the onboard solar cells, which might fetch up to 30 mW/cm², so a low-power device is needed. A microcontroller like TI's MSP430 would be optimal: it offers good performance in a small package (14-pin SMD devices, TSSOP being the smallest) and suits small applications. It works at 3.3 V logic and can operate at voltages as low as 1.8 V. Its sleep-mode current is as low as 40 nA, which matters for the satellite's overall power budget because in my design the MSP430 is inactive for the rest of the time after deployment, and it is essential that it then consume almost no power. I would use the MSP430F2013 variant, as I have practical experience with it on TI's EZ430-F2013 debugger. Now, the design itself:
1) Since no other system will be online in the satellite, it is up to the MSP430 to deploy the antenna and initiate all the other systems. As the MSP430's power requirements are easily met, the device will be ready almost instantly. The controller is assumed to draw power directly from a dedicated array of solar cells that keeps the peak voltage below a specified limit, and since the device has an internal regulator, no external voltage regulation is needed. A delay is generated in the controller using a 16-bit timer before it actually starts the deployment work, just to ensure the solar cells are ready to output maximum power.
2) The next part of the circuit is the switching element. Instead of a relay, I would use a small MOSFET, as it reduces the mass of the module. An 8 V, 1 A MOSFET would be sufficient, since melting the nichrome wire requires less than 1 A at 5 V.
3) As soon as the controller finishes the delay (its length is determined by trial and error after simulation), it actuates the MOSFET. The controller has one heavily multiplexed 8-bit port, and one pin can certainly drive the gate of the MOSFET to turn it on. The MOSFET thus switches the power flow between the solar cells and the nichrome wire. Since no other device is on until the antenna is deployed, we can ensure that maximum power is delivered to the actuator to melt the wire.
4) The controller drives the MOSFET for a duration established during testing, ensuring the antennas are successfully deployed. Once this job is done, the MSP430 has to turn on all the other subsystems of the satellite.
5) This actuation can be controlled via seven other pins of the MSP430. Another option is a common one-time switch, activated by the MSP430 once the antenna has deployed, which acts as a chip enable for all the other circuits on board. After a successful deployment there is no further work for the deployment system, so the MSP430 can be put into sleep mode, where it consumes hardly any power. The MOSFET also passes no power, as the melted nichrome coil leaves no complete electrical path.
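The five steps above can be sketched as a short, fixed sequence. This is not real MSP430 firmware (that would be C against TI's register headers); it is a logic sketch with the hardware stubbed out behind an interface, just to make the ordering of the steps explicit.

```java
// Logic sketch of the deployment sequence; hardware calls are stubbed.
// On the real MSP430 these would be timer configuration, a GPIO write
// to the MOSFET gate, and entry into a low-power mode (e.g. LPM4).
class DeploymentSequence {
    interface Hw {
        void delayMs(long ms);
        void setMosfetGate(boolean on);
        void enableSubsystems();
        void enterSleep();
    }

    static void run(Hw hw, long warmupMs, long burnMs) {
        hw.delayMs(warmupMs);     // step 1: let the solar cells stabilise
        hw.setMosfetGate(true);   // steps 2-3: MOSFET on, nichrome heats
        hw.delayMs(burnMs);       // step 4: hold for the tested burn time
        hw.setMosfetGate(false);  //         then cut power to the coil
        hw.enableSubsystems();    // step 5: chip-enable the other circuits
        hw.enterSleep();          // done; draw (almost) nothing afterwards
    }
}
```

The warm-up and burn durations are the two values that, as noted above, must come from testing; everything else is fixed ordering.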
In this way, an effective deployment system can be achieved with low power consumption and little added mass.

--
Nagaraja

Wednesday, September 1, 2010

Hyper-V

Hello guys, it is time for some new stuff to be posted...

    Today I will be telling you what I know about Hyper-V, Microsoft's hypervisor. It comes built into the Windows Server 2008 operating system, a huge product in Microsoft's server line, built of course on the NT platform and far more evolved than the earlier Windows Server 2003. If you were a Windows Server 2000 user, you might not have noticed many changes when you migrated to Windows Server 2003, but that is not the case with Windows Server 2008: the entire architecture has evolved to a whole new level. Windows Server 2008 comes in several major editions, such as Standard, Datacenter, Enterprise, Web and Itanium. Hyper-V comes as a pre-installed feature in some of these editions; there are versions without Hyper-V that cost less, so depending on your needs, you can purchase the required version of Windows.
    Getting back to Hyper-V: as I said earlier, it provides support for a virtualised environment and hence lets you create a virtualisation server. Some of you might be wondering what the need for a virtual server is.
    Let me explain with an example. Suppose you run a company with 3 database servers, each with a 250 GB hard drive and 1 GB of RAM, and you plan to upgrade each to 3 GB of RAM and 500 GB of disk. You have two options. The first is the traditional one: just upgrade all the systems as planned. Note that you must then also improve the cooling, hardware maintenance and space utilisation for the servers you maintain, because maintenance overhead comes as a free gift along with the huge benefits a server provides. There is another thing to consider: will your servers always use 3 GB of RAM, or only at peak hours? You cannot supply less RAM and spoil your business at peak hours, but you also need not invest in extra RAM just to serve peak demand when you have a better option like Hyper-V.
   I will explain how to realise the above 3 servers using one Hyper-V virtual server. Microsoft has put a lot of effort into virtual servers for reasons like the ones above. Take a single server with, say, 1.8 to 2 TB of disk (disk space is hardly a constraint these days; a 1 TB Seagate drive costs under $50 in the Indian market) and, say, 10 GB of RAM. Why 10 GB? With 3 servers each using 3 GB at peak time, you get around 9 GB of peak RAM usage, and the server OS needs to live too. Here is another interesting thing: Windows Server 2008 offers an install option called Server Core, a choice the administrator makes at installation time. You cannot buy Server Core separately; it is an installation option in every version of the OS. What is new in it? You no longer have the GUI, only the age-old yet most powerful command prompt: a blank window with just a console for your operations. It supports all the features of the server operating system, just without the GUI, so unless you are a command-line geek, don't use this option. It does have one real advantage: it uses less RAM, 384 MB as quoted.
Once you have installed Windows Server 2008 (or R2, its second release), the next step is to create the virtual machines that become your database servers. You create 3 such machines, and creating a virtual machine is very similar to installing a new OS. Hyper-V R2 supports up to 64 logical processors on the host, which is quite a lot. You then have options for virtual hard disks: either fixed-size disks, where you fix the size of the logical disk for the VM (that amount of disk space is then unavailable for anything else), or, more cleverly, dynamic disks, which grow as data is added to the VM's logical drive. You can also reserve logical CPUs for particular virtual machines; if one database server needs much more processor support than the others, allocate it more logical CPUs. As usual, network load balancing support is present.
Not only this: the upcoming SP1 release of Windows Server 2008 R2 adds a feature called Dynamic Memory, where RAM is allocated to virtual machines dynamically. This is a huge benefit for the peak-hour problem I described. You can reserve a nominal starting RAM, say 1.5 GB per VM, so 4.5 GB is allocated for all 3 servers even when they do not need that much (imagine non-peak hours). You can then instruct Hyper-V to grow a VM's RAM from 1.5 GB up to 3 GB as its usage crosses a particular percentage. Say you set up a VM with a nominal 1.5 GB and tell Hyper-V it may receive a maximum of 3 GB whenever its free RAM drops below, say, 20%. Whenever the VM's RAM usage crosses 80%, Hyper-V seeps some extra RAM into the VM, increasing its total allocation so that about 20% stays free. This continues until the total RAM allocated to the VM reaches 3 GB (in this example); after that it either stops growing or asks you to raise the cap. The advantage is that you can not only run 3 database servers simultaneously but also add more VMs onto the virtual server and get more out of a single machine. Microsoft has also paid attention to graphics: you generally cannot get the Aero theme working in a VM, but in SP1 a new feature called RemoteFX enables such high graphical capabilities.
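The grow-when-free-RAM-drops policy can be captured in a few lines. This is my own simplified model of what is described above, not Hyper-V's actual Dynamic Memory algorithm; the numbers (1536 MB start, 3072 MB cap, 20% free target) mirror the example.

```java
// Simplified model of the dynamic-RAM policy described above: grow a
// VM's allocation so that at least minFreePct of it stays free, but
// never beyond the configured maximum. Not Hyper-V's real algorithm.
class DynamicRam {
    static int balloon(int allocatedMb, int usedMb, int maxMb, int minFreePct) {
        int denom = 100 - minFreePct;
        // smallest allocation keeping usedMb at or under (100 - minFreePct)%
        int target = (usedMb * 100 + denom - 1) / denom;   // ceiling division
        // never shrink below the current allocation, never exceed the cap
        return Math.min(Math.max(allocatedMb, target), maxMb);
    }
}
```

A VM started at 1536 MB that is using 1400 MB has only about 9% free, so the model grows it to 1750 MB; a VM demanding 3000 MB would want 3750 MB but is pinned at the 3072 MB cap.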
If you wish to move a VM from one virtual server to another, Windows Server 2008 R2 provides Cluster Shared Volumes, which allow VMs to be transferred from one server to another without any drops in connections during the migration. And yes, this feature is called "Live Migration": you migrate VMs between servers without dropping the connections that exist to those VMs on the network.

There is a lot more to know about Hyper-V; I will leave that to you. Refer to the book on Windows Server 2008, which happens to be the first book in my collection; you will find it on the Books page of my blog. Happy reading!

Monday, August 30, 2010

Intel to Acquire Infineon's Wireless Solutions Business

NEWS HIGHLIGHTS

- Intel to purchase Infineon's Wireless Solutions Business, called WLS, in a cash transaction valued at approximately $1.4 billion. The deal is expected to close in the first quarter of 2011.
- The WLS sale enables Infineon to expand its leading position in markets for automotive, industry and security technologies.
- WLS will operate as a standalone business. Intel is committed to serving WLS' existing customers, including support for ARM-based platforms.
- The acquisition expands Intel's current Wi-Fi and 4G WiMAX offerings to include Infineon's 3G capabilities and supports Intel's plans to accelerate LTE. The acquired technology will be used in Intel® Core processor-based laptops and a myriad of Intel® Atom™ processor-based devices, including smartphones, netbooks, tablets and embedded computers.
- The deal aligns with the Internet connectivity pillar of Intel's computing strategy.

About Infineon
Infineon Technologies AG, Neubiberg, Germany, offers semiconductor and
system solutions addressing three central challenges to modern
society: energy efficiency, mobility, and security. In the 2009 fiscal
year (ending September), the company reported sales of Euro 3.03
billion with approximately 25,650 employees worldwide. With a global
presence, Infineon operates through its subsidiaries in the U.S. from
Milpitas, CA, in the Asia-Pacific region from Singapore, and in Japan
from Tokyo. Infineon is listed on the Frankfurt Stock Exchange (ticker
symbol: IFX) and in the USA on the over-the-counter market OTCQX
International Premier (ticker symbol: IFNNY).

Intel, the world's largest chip maker, is also a leading manufacturer
of computer, networking and communications products. Additional
information about Intel is available at www.intel.com/pressroom.

--
Nagaraja

Saturday, August 28, 2010

Starting with Java

Hello guys,

In this post I will explain the ways in which you can start off programming in Java. Very soon I will embed a video, more like a screencast, teaching you how to use Java.

Generally, whenever you start off with a programming language, the main thought that comes to mind is "How do I compile the code, and where do I deploy it?" This post will teach you just the basics of Java: all you need to get started with some serious coding.

I assume the readers of this post know something about programming languages; for a clear understanding of all the concepts, I suggest you read some of the books on C++ listed on my Books page.

With that clarified, let me start off with some history. Java was created by James Gosling and his team in the early 1990s with a view to creating platform-independent code; the original focus was on software for simple appliances like toasters and washing machines, ensuring the code need not be rewritten every time it targets machines from different companies. What I mean is this: assume you need to write washing-machine code for both Siemens and LG. With languages like C and C++, you would have to write platform-specific code for each, as the deployment platforms are totally different. With Java this problem is eliminated: programmers write a single piece of software and deploy it on any machine. Confused? Don't be. I said a single Java program can run on any platform, but for the code to run there must be a platform-specific layer, an environment called the JVM, or Java Virtual Machine. Some of you might wonder what difference it makes if you need platform-specific software again. Here is the fact: in C and C++ you would have to recode the software to the target platform's specifications, compile it wholly and then deploy. In Java, all written code is compiled by the Java compiler into a machine-independent bytecode file, the .class file. This file can run on any JVM, so only the JVM changes from platform to platform, not the written code. This greatly reduces the burden of platform dependencies.

Now, to start off......
You must have a Java Development Kit (JDK) installed on your system in order to compile your programs. Let me brief you on the steps to write a Java program that says "Hello Viewers": simple enough, yet it will teach you a lot of new concepts. There are two ways to do this. If you are a GUI fan, you can use an IDE like Eclipse or NetBeans, both free software available on the internet. The other is the all-powerful command-line compilation and execution, which is actually good for many starters as it shows you the fundamentals of how Java code works.
The JDK is freely available from Sun Microsystems. Just download the kit and install it. You can choose which JDK package you wish to download; packages generally come as J2SE, J2EE, J2SE with NetBeans and more. If you wish to stick with just command-line compilation as I said earlier, download the J2SE package. The most recent at the time of writing is J2SE 1.6 Update 21. I would suggest downloading the J2SE package with NetBeans, as it will ease your work when you start developing complex software.
After a successful installation you have the JDK ready to use, along with the NetBeans IDE, which is pretty decent to operate.

First let me go with the usual command line coding of Java.

Let me show you the code first and then brief you with some other details about compiling. The code is pretty simple.
-------------------------------------------------
class First
{
    public static void main(String[] s)
    {
        System.out.println("Hello whoever you are");
    }
}
-----------------------------------------------

That's it, the code is ready. Just save it in a file named First.java, and no other name. I will tell you why: in Java, the file must be named after the top-level public class it contains. If there is no top-level public class, the file may take any name, though conventionally you use the name of one of the top-level classes. What I mean by a top-level class is this:

class TopLevel { class InnerClass {} } is just a dummy class, with self-explanatory names. For a file containing this class, I should name the file TopLevel.java. Also note the case of the name: toplevel.java would compile, but running it as java toplevel would fail, since class names are case-sensitive and you always run the class by its exact name, TopLevel. So take care with the case of the file name. To make it clearer:

class First { class Second {} }
class Third { class Fourth {} }

for the above code, I can name the file either First.java or Third.java.

class First { class Second {} }
class Third { public class Fourth {} }
public class Fifth {}

for this, the only valid file name is Fifth.java.

public class First{}
public class Second{} will generate an error, as there cannot be two top-level public classes in the same file.
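These naming rules can actually be checked programmatically with the compiler API that ships with the JDK (javax.tools). The helper below is a sketch of my own: it writes a source string to a file with a chosen name and reports javac's exit code (0 means it compiled). Note it needs a full JDK at runtime, not just a JRE.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.ToolProvider;

class NamingRuleDemo {
    // Writes 'source' into a file called 'fileName' in a temp directory
    // and invokes the system javac on it. Returns javac's exit code.
    static int compile(String fileName, String source) {
        try {
            Path dir = Files.createTempDirectory("naming");
            Path src = dir.resolve(fileName);
            Files.writeString(src, source);
            // discard javac's error output; we only care about the result
            return ToolProvider.getSystemJavaCompiler()
                    .run(null, null, OutputStream.nullOutputStream(), src.toString());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // no public class: any file name is accepted by javac
        System.out.println(compile("Anything.java", "class First {} class Third {}"));
        // public class Fifth in a file not named Fifth.java: rejected
        System.out.println(compile("Wrong.java", "public class Fifth {}"));
    }
}
```

You can use the same helper to confirm that two public top-level classes in one file never compile, whatever the file is called.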

Now that the naming problem is solved, we come to the compilation process. Here you will need to focus a bit. If you want the easy method, skip ahead to step 2.
1) This is the tough way, but it will definitely make your life easier henceforth.
I assume you know how to use the Windows command prompt; if not, type cmd in the Run box of your Start menu. If you want to run an executable in a folder (say speed.exe, the launcher of the NFS Most Wanted game, my favourite racing game), you have to go to the folder where NFS is installed and then run the application with the command speed.exe, right? Yet you can start Notepad from any random folder just by typing notepad.exe in the command prompt. How? There is something called environment variables which controls this behaviour. There is a variable called "path" that you will have to set, since you have chosen the tougher route. OK, now right-click your My Computer icon and select Properties.
Go to Advanced -> Environment Variables.
I will not bother you with too many details here; just contact me if you want anything further and I will surely get back to you.
Look in the user-variables section, which is the first block on the screen, and browse for a "path" variable. If you find one, click Edit; otherwise create one with the New button and type the variable's name as "path".
If you already have a path variable defined, ";" is used as a delimiter: go to the end of the variable's value (the second text field), add a ";" and then type the full path of your JDK's bin directory. If you are creating a new path variable, just type the bin directory's path.
Click OK and close the window.
Now if you type javac in any directory in the command prompt, it should display the list of switches you can use with javac. If it doesn't work, contact me via email and I'll help you out.
2) This is the less troublesome method, but you will have to repeat it every time you want to compile Java code. Go to the directory where you installed the JDK, generally \Program Files\Java\(JDK Version)\, and then into its bin directory, all using cd in the command prompt. A hectic way, but it requires less thought.

Now that you have direct access to javac and java, the Java compiler and interpreter respectively, compilation is pretty simple.
To compile: javac First.java (in general, javac followed by the file name with its .java extension).

To run the code: java First. Do not include any extension when running; you pass the class name, not the file name. You will see the output if you have developed the code correctly; otherwise compilation errors or runtime exceptions will be reported.

----------------------------------------------------------------------------------------------------------------
Now let me describe the same process using a GUI, namely NetBeans, which comes in the package I mentioned earlier. Just follow these steps:
1) Start NetBeans. (I assume you downloaded the exact package I mentioned earlier, containing NetBeans 6.9.1 integrated with JDK 6 Update 21; in any case, the general process should not change.)
2) Go to File -> New Project -> select Java and choose Java Application -> click Next -> type a project name and set the location to save the project -> ensure the Create Main Class checkbox is ticked -> click Finish.
3) A Main.java is preloaded. Just type the following code inside the public static void main(String[] args) { } block only:

System.out.println("Hello");

That's it, you are done with the coding. Just ensure the statement is inside the main() block itself; if you cannot manage that, Java may not be for you.
To run this, go to Run -> Run, or just press F6.


That does it; my job of introducing you to Java is fairly done. Happy coding! Please leave your feedback via comments, or contact me at nagaraja.r@live.in for any queries regarding Java and a proper answer.







Monday, August 23, 2010

Infosys Project...

Hello guys, here is a video, more like a screencast. My teammates and I recently did a project in which we developed a piece of software called Medical Record Maintenance System, which of course, as the name tells, does the job of maintaining patients' records. The interesting thing is that we designed a GUI with mouse support using only the C programming language. The database behind it is SQL Server 2008, interfaced with the help of some third-party driver software. This video shows how to use our newly developed software and should give you an insight into developing a GUI in C. Sorry for not uploading the code; I would really like to, but it is nearly 1800 lines, and I would need to host an FTP server if I were to supply it to everyone. Don't worry though: I will upload it to SkyDrive and add the link here as soon as the project is evaluated and certified by Infosys.

Friday, August 20, 2010

Intel to acquire Texas Instruments Cable Modem Unit

Intel Corporation has announced that it has signed an agreement to acquire Texas Instruments' cable modem product line. The purchase enhances Intel's focus on the cable industry and related consumer electronics (CE) market segments, where the company's expertise in building advanced system-on-chip (SoC) products, based on Intel® Atom™ processors, will be applied.

Intel plans to combine Texas Instruments' best-of-breed Puma product lines with the Data Over Cable Service Interface Specification (DOCSIS) standard technology and Intel SoCs to deliver advanced set top box, residential gateway and modem products for the cable industry. The objective is to provide cable OEMs with an open and powerful platform for delivering innovative and differentiated products to service providers that improve the video, voice and data content experience at home.

"Adding the talents of the Texas Instruments' cable team to Intel's efforts to bring its advanced technology to consumer electronics makes for a compelling combination," said Bob Ferreira, general manager, Cable Segment, Intel's Digital Home Group. "Intel is focused on delivering SoCs that provide the foundation for consumer electronics devices such as set top boxes, digital TVs, Blu-ray* disc players, companion boxes and related devices. This acquisition specifically strengthens Intel's product offerings for the continuum of cable gateway products and reinforces Intel's continued commitment to the cable industry."
All employees of Texas Instruments' cable modem team received offers to join Intel at sites in their home countries, primarily Israel, and will become part of Intel's Digital Home Group. Additional terms of the transaction were not disclosed. The agreement is subject to regulatory review and customary closing conditions. It is expected to close in the fourth quarter of 2010.

Intel, the world's largest chip maker, is also a leading manufacturer of computer, networking and communications products. Additional information about Intel is available at www.intel.com/pressroom.

Intel to Acquire McAfee


NEWS HIGHLIGHTS

- Intel Corporation has entered into a definitive agreement to acquire McAfee, Inc., through the purchase of all of the company's common stock at $48 per share in cash, for approximately $7.68 billion. Both boards of directors have unanimously approved the deal, which is expected to close after McAfee shareholder approval, regulatory clearances and other customary conditions specified in the agreement.
- The acquisition reflects that security is now a fundamental component of online computing. Today's security approach does not fully address the billions of new Internet-ready devices connecting, including mobile and wireless devices, TVs, cars, medical devices and ATMs, as well as the accompanying surge in cyber threats.
- Providing protection to a diverse online world requires a fundamentally new approach involving software, hardware and services. Inside Intel, the company has elevated the priority of security to be on par with its strategic focus areas in energy-efficient performance and Internet connectivity.
- McAfee, which has enjoyed double-digit, year-over-year growth and nearly 80 percent gross margins last year, will become a wholly owned subsidiary of Intel, reporting into Intel's Software and Services Group. The group is managed by Renée James, Intel senior vice president and general manager of the group.

"With the rapid expansion of growth across a vast array of Internet-connected devices, more and more of the elements of our lives have moved online," said Paul Otellini, Intel president and CEO. "In the past, energy-efficient performance and connectivity have defined computing requirements. Looking forward, security will join those as a third pillar of what people demand from all computing experiences.

"The addition of McAfee products and technologies into the Intel computing portfolio brings us incredibly talented people with a track record of delivering security innovations, products and services that the industry and consumers trust to make connecting to the Internet safer and more secure," Otellini added.

"Hardware-enhanced security will lead to breakthroughs in effectively countering the increasingly sophisticated threats of today and tomorrow," said James. "This acquisition is consistent with our software and services strategy to deliver an outstanding computing experience in fast-growing business areas, especially around the move to wireless mobility."

"McAfee is the next step in this strategy, and the right security partner for us," she added. "Our current work together has impressive prospects, and we look forward to introducing a product from our strategic partnership next year."

"The cyber threat landscape has changed dramatically over the past few years, with millions of new threats appearing every month," said Dave DeWalt, president and CEO of McAfee. "We believe this acquisition will result in our ability to deliver a safer, more secure and trusted Internet-enabled device experience."

McAfee, based in Santa Clara and founded in 1987, is the world's largest dedicated security technology company with approximately $2 billion in revenue in 2009. With approximately 6,100 employees, McAfee's products and technologies deliver secure solutions and services to consumers, enterprises and governments around the world and include a strong sales force that works with a variety of customers.

The company has a suite of software-related security solutions, including end-point and networking products and services that are focused on helping to ensure Internet-connected devices and networks are protected from malicious content, phony requests and unsecured transactions and communications. Among others, products include McAfee Total Protection™, McAfee Antivirus, McAfee Internet Security, McAfee Firewall, McAfee IPS as well as an expanding line of products targeting mobile devices such as smartphones.

Intel has made a series of recent and successful software acquisitions to pursue a deliberate strategy focused on leading companies in their industry delivering software that takes advantage of silicon. These include gaming, visual computing, embedded device and machine software and now security.

Home to two of the most innovative labs and research in the high-tech industry, Intel and McAfee will also jointly explore future product concepts to further strengthen security in the cloud network and myriad of computers and devices people use in their everyday lives.

On a GAAP basis, Intel expects the combination to be slightly dilutive to earnings in the first year of operations and approximately flat in the second year. On a non-GAAP basis, excluding a one-time write down of deferred revenue when the transaction closes and amortization of acquired intangibles, Intel expects the combination to be slightly accretive in the first year and improve beyond that.

Intel was advised by Goldman Sachs & Co. and Morrison & Foerster LLP. McAfee was advised by Morgan Stanley & Co. Inc. and Wilson Sonsini Goodrich & Rosati, P.C.

Intel, the world's largest chip maker, is also a leading manufacturer of computer, networking and communications products. Additional information about Intel is available at www.intel.com/pressroom.


Rest of the story......

Well, let me tell you about the rest of the story...

The next day I happened to go there with many of my classmates. Many were really put off by the technology there, but let me tell you one thing: it's a place on earth where every embedded system designer would wish to be.
Let me get into the concepts part rather than the boring story.
Again, I am going to divide this into 3 units, to be precise. Each one is going to give you, the readers I mean, a brief outlook of Microchip's paid classes, which were charged at $10 per class. Be happy that you are getting it for free. I mean, you are all making a profit of $30, plus the pleasure of sitting back rocking your chair wherever you are, just by reading this blog... wish I had someone who could do the same for me.

The first part is about Microchip's enhanced PIC16 microcontroller and their Pro-C compiler for programming their PIC controllers. The second part is about USB connectivity for embedded systems design, which again was one of their $10 lectures. The last one is the most interesting: harvesting solar energy to feed the main electrical grid. It was more of a product info class, but the concepts were good, though a bit dated.

1) Coming to the Enhanced PIC16 microcontrollers: PIC controllers follow the Harvard architecture, unlike the Von Neumann architecture, in which there is a common memory space for both data and code. In the Harvard architecture, the code and data memories are separate, which is what gives it its speed advantage over the other architecture, though it also adds some coding complexity to the system.
I noted some features of the PIC16 in my notebook during the lecture. Hope some of them will be useful:


  • Enhanced Mid-range Core with 49 Instructions, 16 Stack Levels
  • Flash Program Memory with self read/write capability
  • 96 LCD segment drive support
  • Internal 32MHz oscillator
  • Integrated Capacitive mTouch Sensing Module
  • MI2C, SPI, EUSART w/auto baud
  • 3 ECCP & 2 CCP (Enhanced/Capture Compare PWM)
  • Comparators with selectable Voltage Reference
  • 14 Channel 10b ADC with Voltage Reference
  • 25mA Source/Sink current I/O
  • Four 8-bit Timers (TMR0/TMR2/TMR4/TMR6)
  • One 16-bit Timer (TMR1)
  • Extended Watchdog Timer (EWDT)
  • Enhanced Power-On/Off-Reset
  • Brown-Out Reset (BOR)
  • In Circuit Serial Programming (ICSP)
  • Wide Operating Voltage (1.8V – 5.5V)
  • Low Power PIC16LF1939 variant (1.8V – 3.6V)
    You can just Google the specifications using the Google search tab which I have provided at the bottom of my blog.
    I even have a Google book embedded in the book store of my blog. Feel free to visit them.
    Coming to the Pro-C compiler, here are some snapshots which I managed to pull from their website. I mean, they only covered the C language fundamentals; not much stress was laid on the core of the topic itself. Funny...
    • Linker

      • Added a new linker command-line switch, --sort-section name|alignment, to sort sections by section name or maximum alignment.
      • Added SORT_BY_NAME and SORT_BY_ALIGNMENT to the linker script language to permit sorting sections by section name or section maximum alignment.
      • New switch: --print-gc-sections to list any sections removed by garbage collection.
      • Added a new command-line option '--default-script=FILE' or '-dT FILE' which specifies a replacement for the built-in, default linker script.
      • Linker scripts support a new INSERT command that makes it easier to augment the default script.
      • Linker-script input-section filespecs may now specify a file within an archive by writing "archive:file".
      • The --sort-common switch now has an optional argument which specifies the direction of sorting.
      • The Linker sources are now released under version 3 of the GNU General Public License.
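For instance, the INSERT command mentioned above lets you splice a new output section into the built-in default script instead of replacing the whole script. A minimal sketch (the section name `.mydata` here is made up purely for illustration):

```
/* hypothetical example: add a custom section without
   rewriting the whole default linker script */
SECTIONS
{
  .mydata : { *(.mydata) }
}
INSERT AFTER .text;
```

The default script stays in effect; the linker just places `.mydata` right after `.text`.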
     
    • Binary Utilities


      • pic32-readelf can now display address ranges from .debug_range sections. This happens automatically when a DW_AT_range attribute is encountered. The command line switch --debug-dump=Ranges (or -wR) can also be used to display the contents of the .debug_range section.
      • pic32-objcopy recognizes two new options, --strip-unneeded-symbol and --strip-unneeded-symbols, for use together with the wildcard matching that the original --strip-symbol/--strip-symbols options provide, but retaining any matching symbols that are still needed by relocations.
      • Added -g/--section-groups to pic32-readelf to display section groups.
      • Added --globalize-symbol and --globalize-symbols switches to pic32-objcopy to convert local symbols into global symbols.
      • Added -t/--section-details to pic32-readelf to display section details.
      • Added -W/--dwarf to pic32-objdump to display the contents of the DWARF debug sections.
      • Added -wL switch to dump decoded contents of .debug_line.
      • Added -F switch to pic32-objdump to include file offsets in the disassembly.
      • Added -c switch to pic32-readelf to allow string dumps of archive symbol index.
      • Added -p switch to pic32-readelf to allow string dumps of sections.
      • The Binutils sources are now released under version 3 of the GNU General Public License.




      2) Coming to the USB part, I'll try to brief you on some of the main concepts which I know and learnt there. USB is a mechanism by which the host can access resources through Plug and Play (PnP). USB devices are generally hot-swappable, which means there is no need to power down your system to remove a device and deallocate its resource handles.

      Let me explain this with a scenario. Consider your own PC, which has a USB port. A single USB host can support a maximum of 127 devices. Don't expect that you can use a hub and multiply this number: in USB, all devices act as nodes as if they were directly connected to the host (in this case your PC), and it doesn't matter how many hubs are placed between your PC and your device. The PC will not differentiate between a hubbed connection and a stand-alone (direct) connection.

      The port you see on your PC is a standard Type-A USB interface; the host will always have a Type-A connector. There is a reduced-size version of the connector called the Type-B connector, which you might see on many target boards. There is also a flatter connector, similar in length to the Type-A but narrower, called the mini-B type. Finally there is a still smaller type called the micro-B, which you would generally find on mobile phones. Now let me get into the hardware details.
USB comes in three categories based on the speed of data transmission: USB 1.1, USB 2.0, and USB 3.0, which happens to be the latest high-speed standard. A USB 1.1/2.0 cable has 4 wires, of which two carry power and the other two carry data. There is actually no need to focus on anything beyond USB 1.1 here, as embedded systems mainly need USB for burning code onto the flash of the smart device, not for continuously communicating with the host. So for the purpose of writing the program onto the flash chip, USB 1.1 does a lot at the present day. Generally there will be a USB interface controller to synchronize with the target device. This can be a dedicated controller, or any microcontroller with USB support doing the job of the USB controller. More details regarding this can be found in the two books which I have embedded in the book store.
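To make the protocol side a little concrete: every USB control transfer (the kind used during enumeration, i.e. when you first plug a device in) begins with an 8-byte setup packet, defined in chapter 9 of the USB specification. A minimal sketch in Python, purely for illustration:

```python
import struct

def setup_packet(bmRequestType, bRequest, wValue, wIndex, wLength):
    """Build the 8-byte USB setup packet; all multi-byte
    fields are little-endian on the wire (USB spec ch. 9)."""
    return struct.pack("<BBHHH", bmRequestType, bRequest, wValue, wIndex, wLength)

# A standard GET_DESCRIPTOR request for the device descriptor:
# 0x80 = device-to-host, standard request, device recipient;
# bRequest 6 = GET_DESCRIPTOR; wValue 0x0100 = descriptor type 1
# (DEVICE), index 0; 18 bytes is the device descriptor's size.
pkt = setup_packet(0x80, 6, 0x0100, 0, 18)
assert len(pkt) == 8
```

Every device, hubbed or not, answers this same request during enumeration, which is how the host sorts out those up-to-127 nodes.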

      3) Lastly, I wish to tell you about the solar energy harvesting concept they have developed. Pretty simple: you have a solar panel which harvests nearly 100 W of power, and you feed this to a buck-boost inverter circuit (a pretty complex one). This produces the required voltage, which drives current back into the grid, which in turn means feeding the main electrical grid. There have been many government incentives for this; specifically in Germany, there is an incentive of $0.39 per watt of power fed into the grid. Coming back to the circuit, the buck-boost output voltage is sensed by a microcontroller (a PIC controller). The controller also senses the mains line voltage and phase; by matching the inverter's phase to that of the mains, the power is forced into the grid, and you can actually make the electric usage meter run backwards. There is wide scope for development in this sector, with a lot of government incentives on offer. You might even consider this field as a career sometime.
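To make the phase-matching step concrete, here is a toy Python sketch (not from the actual Microchip design; the function names and numbers are made up) that estimates the inverter's phase offset from the mains by comparing zero-crossing times, which is essentially what the PIC has to work out before pushing power into the grid:

```python
import math

def rising_zero_crossing(samples, dt):
    """Time of the first negative-to-positive crossing,
    linearly interpolated between adjacent samples."""
    for i in range(1, len(samples)):
        if samples[i - 1] < 0 <= samples[i]:
            frac = -samples[i - 1] / (samples[i] - samples[i - 1])
            return (i - 1 + frac) * dt
    return None

def phase_error_deg(grid, inverter, dt, freq_hz=50.0):
    """Phase of the inverter output relative to the grid, in degrees."""
    delay = rising_zero_crossing(inverter, dt) - rising_zero_crossing(grid, dt)
    return (delay * freq_hz * 360.0) % 360.0

# 50 Hz mains sampled at 10 kHz; inverter deliberately lags by 30 degrees
dt = 1e-4
t = [k * dt for k in range(400)]
grid = [math.sin(2 * math.pi * 50 * x) for x in t]
inv = [math.sin(2 * math.pi * 50 * x - math.pi / 6) for x in t]
err = phase_error_deg(grid, inv, dt)  # close to 30 degrees
```

A real controller would feed this error back to shift the inverter's reference waveform until the error approaches zero, and only then close the contactor to the grid.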


    Tuesday, August 17, 2010

    First Day Experience in ESC India 2010

    Well, let me just get into the topic of what I learnt on the first day at the ESC seminar, rather than telling you the colourful details of my experience.
    Let me divide this discussion into two parts. In the first part I will be explaining, as well as providing some links about, MEMS and its applications, where you can get a new feel of today's technology. In the second part I will be talking about the MSP430 microcontroller from Texas Instruments.


    1) MEMS are the markers of technical development in the electronics industry. It stands for Micro-Electro-Mechanical Systems. If you wish to create a touch sensor, or basically a pressure sensor, a capacitive sensor would be the traditional idea. In a capacitive sensor, one plate of the capacitor is generally fixed, and the other is attached to the surface where you wish to measure the pressure, force or even the touch effect. This plate experiences mechanical force and moves, so the distance between the capacitor plates changes. Suppose a particular voltage V (say) has been applied across the capacitor before the displacement occurs. When the plates move, the capacitance changes; if the charge on the plates is then held fixed, this causes the voltage across the capacitor to change as well. This changed voltage is fed to an ADC (analog-to-digital converter), and the digital value, when fed to some logic circuit or a smart chip like a microprocessor or microcontroller, can determine the exact force or pressure applied. But the latest MEMS have these mechanical structures etched right onto the silicon wafer of the chip, so there is no need to arrange a separate capacitor-ADC setup for getting the converted signals from the sensor. The entire setup can be present in a single chip, which can even be designed to give a processed value of force, pressure, etc.
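As a rough back-of-the-envelope sketch of the relation just described (Python, with illustrative numbers only; this assumes the plate charge is held fixed after biasing and uses the ideal parallel-plate formula C = ε0·A/d for an air gap):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2, gap_m):
    """Ideal parallel-plate capacitance C = eps0 * A / d (air gap)."""
    return EPS0 * area_m2 / gap_m

# 1 mm^2 plate with a 10 um gap, biased to 5 V and then isolated
area = 1e-6
c0 = capacitance(area, 10e-6)
q = c0 * 5.0                     # charge trapped on the plates (Q = C*V)

# pressing the surface narrows the gap to 8 um: capacitance rises,
# so at fixed charge the voltage drops (V = Q/C), here to about 4 V
v_pressed = q / capacitance(area, 8e-6)
```

A MEMS sensor integrates exactly this plate-plus-converter chain on one die, which is the whole point of the paragraph above.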
    Nowadays, these MEMS devices are used to create accelerometers which measure acceleration, good gyroscopes which measure rotation rate, and motion sensors which can sense motion in 3 dimensions.
    For more details refer to my books collection page, which has 2 books on MEMS. The first one is really good as it mainly focuses on application-oriented concepts. I have read it, and you can rely on me and start with it if you do want to start reading some books on MEMS. The second book is on how to develop MEMS; more physics and materials chemistry. Don't curse me if you find it uninteresting.


    2) Coming to the MSP430 microcontrollers: the MSP430 is a low-power, highly efficient RISC-core microcontroller from Texas Instruments. It works at really low power, over an input voltage range of 1.8 – 3.6 V. There are many advanced features: it supports JTAG debugging, has a built-in watchdog timer, and offers great compatibility.
    Texas Instruments has great product support for this device. Kindly visit this link for more authoritative information:
    www.ti.com/msp430

    Tuesday, August 3, 2010

    ESC Bangalore 2010

    Hello folks.......
                 
    Sorry for not publishing any posts for so many days. Well, in this post I would like to tell you about the ESC Bangalore conference that was held back in July, during the 21st – 23rd.
                 
    Firstly, I would like to tell you what ESC is. ESC { www.esc.com } stands for Embedded Systems Conference, a global group whose main job is conducting various conferences related to electronics and embedded systems development all over the globe. This time in India it was conducted in Bangalore, at the NIMHANS Convention Center. It was sponsored by many leading companies like Microchip, STMicroelectronics and Green Hills Software, and organized by UBM India.
    The main focus of this conference was developing for ARM processors, debugging tools and techniques, multi-core, multi-threading and virtualisation, multimedia and signal processing, RTOS, and system integration and testing. Many lecture sessions were conducted for the conference delegates by industry leads like Michael Barr, a globally recognized expert on the design of embedded computer systems; Clive Maxfield, whose expertise is in the field of ASIC development and who has authored many books; Robert Oshana, a senior member of the IEEE; Bob Zeidman, the founder of Zeidman Consulting, with a prime focus on RISC; and the list goes on. There were exhibit stalls hosted by nearly 22 companies.
                 
    Let me tell you about my experience at the conference. I will divide this into 3 posts so that it is easy to access and read.